The Algorithm Babysitter: AI-Generated Content and the Emerging Human Rights Crisis in Early Childhood Development
- Human Rights Research Center
Author: Olivia Weninger
December 30, 2025
![Image source: MediaMister](https://static.wixstatic.com/media/e28a6b_d717df057c04458fa2019f1cb3a9d5fe~mv2.png/v1/fill/w_49,h_33,al_c,q_85,usm_0.66_1.00_0.01,blur_2,enc_avif,quality_auto/e28a6b_d717df057c04458fa2019f1cb3a9d5fe~mv2.png)
Summary
Videos that fat-shame, stereotype, or place newborns in odd real-world situations are rampant in Instagram reels and TikTok feeds designed for children. Although such content is not conducive to childhood development, young children are consuming it in large quantities every day in this unregulated digital nursery.
Meanwhile, the rapid advancement of AI and generation of AI-created videos is adding another wrinkle to the situation. As the digital spaces for children are flooded with low-quality, AI-generated content that is often nonsensical, inappropriate, and laden with harmful stereotypes, engagement-driven algorithms amplify this "brain rot," prioritizing watch-time over developmental appropriateness.
Sitting a child in front of a screen with a child-oriented show or channel may provide short-term relief for the parents but create long-term problems for the child. Here, I present a case study that analyzes 30 minutes of TikTok algorithm-generated content from a popular child-focused media environment in the context of current literature on child development, screen time, technology risks, and AI ethics.
I argue that the proliferation of unregulated AI-generated content for children is not merely a parenting challenge but a significant human rights concern; it actively undermines a child's right to healthy cognitive and emotional development, protection from prejudice, and the ability to form a coherent understanding of reality. Therefore, I call for a multi-pronged regulatory solution.
Factors Conducive for Healthy Childhood Development
A child's early years are irrefutably their most formative; they form the basis for much of their cognitive, social, and emotional skills, and they present a critical window to which aspects of a person's adult life can be traced back. The most formative years are those before entry to primary school, typically at age 5 or 6. These years have lasting effects on personality, social behavior, and intellectual capacity; half of a person's intellectual potential is reached by the age of 4 (Akindele, 2012).
A well-rounded childhood that focuses on the value of reading, play, and parent-child interaction creates a healthy developmental environment for those important years. Such an environment allows a child to feel grounded and secure enough to be willing to venture out and learn on their own.
Reading, a foundation for a child's education, is a basic life skill that will continue to be cultivated and is a cornerstone for a child's success both in school and in life (Akindele, 2012). An insufficient ability to read properly limits a person's job opportunities and personal fulfillment. A child with an awakened interest in reading is more likely to push for future academic excellence (Akindele, 2012). Reading also allows a child to develop empathy and creativity by placing themselves in the shoes of fictional characters.
Additionally, the importance of play in the development of a child cannot be overstated; its purposes are far more multifaceted than one may think. Play is a learning experience that rehearses emotional skills as well as cognitive ones. While playing, children learn how to express their anxieties and cope with them. Play-acting creates the foundations for a child's coping skills, as they imagine themselves in different situations, from the onset of a problem to its resolution. As their coping skills improve, those situations and stimuli provoke less anxiety because children feel more confident in their abilities (Cohen, 2018). Play also helps a child understand themselves and the world around them, test their cognitive and manipulative skills, develop their imagination, and learn to cooperate in group-directed play. Pretend play is inextricably linked to one's sense of identity. Without play, a child would be far more one-dimensional and behind their peers in many developmental categories (Cohen, 2018).
A child's relationship with their parents is another massive contributor to their development and achievement in education and later in life. While children are naturally curious, their education and growth are ultimately in the hands of their caregivers; their skills need to be attentively nurtured. Parents are a child's first educators, from a newborn maintaining eye contact with them to recognizing that certain words produce certain effects. Conversing with and listening to the child creates a sense of trust between child and parent. As a result, the child feels comfortable enough to verbally express their thoughts, from small questions to big feelings. Conversely, children who develop poor verbal skills in the first three years tend to do poorly in school and are at risk of developing antisocial behavior as teens (Akindele, 2012).
The importance of reading, play, and the parent-child relationship in child development cannot be overstated. Technology may have its appropriate uses and benefits. However, replacing any of these factors with technology, especially technology designed to manipulate a child's dopamine regulation and keep them scrolling, can be irreversibly detrimental to a child's long-term fulfillment. As the Chair of the Canadian Paediatric Society (CPS) Digital Health Task Force, Dr. Michelle Ponti, put it, "We need to prioritize school activities, physical activity, sleep, and social activities like family meals before reaching for a device" (Vogel, 2019).
In 1999, the American Academy of Pediatrics (AAP) recommended no more than two hours of screen time a day for children aged two and older and zero screen time for children under two (Daugherty, 2014). The AAP and the White House Task Force on Childhood Obesity have continued to promote the under-two-hour guideline due to the negative effects of technology use on a young child's attention, behavior, focus, weight, social development, and language development (Daugherty, 2014). In addition, too much screen time can intensify existing symptoms of autism spectrum disorder (ASD), such as social withdrawal, communication difficulties, and hyperactivity, because it interferes with the development of communication, social interaction, and language skills in children with ASD (Capanna-Hodge, 2023).
While initial research focused on the quantity of screen time, later research finds that the quality of screen time also significantly affects a child's development: what type of content is consumed, and whether it is used as a group-bonding experience or a solitary scrolling hole. Interactive, shared screen activities, such as group games or co-viewing videos, are much healthier for a child's social development than passive, solitary ones.
Current Digital Environment for Children
In the past decade, CoComelon, a platform whose audience is largely children aged 2-4, has become an international phenomenon and is responsible for a large share of recent child screen time. While CoComelon officially launched on YouTube in 2006, it did not rise to fame until 2018. CoComelon features animated characters who sing and dance to different songs and nursery rhymes. By 2020, CoComelon was the most-watched YouTube channel in the world, averaging over 3.5 billion views per month (Los Angeles Times, 2024). During the COVID-19 pandemic, CoComelon was a lifeline for parents who wanted a break from providing constant stimulation to their children. TikTok videos of "iPad kids" loudly watching CoComelon in public became a viral phenomenon, as these kids couldn't get enough. The typical CoComelon audience is children aged 0 to 4, and as aforementioned, half of a person's intellectual potential is reached by age 4, the entire age range of that audience. Entertainment industry strategists have called the market for kids' entertainment "massive" and its potential largely untouched (Los Angeles Times, 2024). This suggests that the preying on the minds of children is intentional, and that these companies are actively choosing money and market growth over the development of a growing human's brain and social skills.
Much like a drug, the fast-paced music and bright colors in CoComelon videos trigger a release of dopamine in the child, driving the show's addictive quality. Due to this hyper-stimulating, addictive nature, parents and experts have expressed concern over its widespread use as a substitute for social interaction or parent-child bonding (Capanna-Hodge, 2023). Over time, such children become dysregulated and may even have difficulty playing or engaging in creative activities without the presence of the show.
Coupled with the bright colors and quick pacing popularized by CoComelon, the rise of similarly hyper-stimulating content on TikTok and other apps designed to hold a person's attention for long periods has given rise to the concept of "TikTok Brain," a phrase used to describe changes in focus, attention span, and overall cognitive function that result from frequent use of TikTok and similar scrolling apps (ReNu Counselling & Psychotherapy, 2024). These rapid, short videos entice users to stay by repeatedly prompting the brain to release dopamine. The instant gratification, delivered in consistent, tiny pulses, can make it extremely difficult for an adult to pull away; for a child who has not yet developed impulse control, this separation can be nearly impossible.
Over time, the consequences of long-term short-video watching include mental health issues, decreased cognitive function, and a loss of the ability to concentrate on multi-step tasks that require sustained attention, along with physical side effects such as sleep deprivation and obesity (ReNu Counselling & Psychotherapy, 2024). The negative side effects of TikTok and similar apps are still being researched and may prove to have more harmful long-term effects than previously thought, especially on young developing brains.
CoComelon and TikTok alone can do irreparable damage to a child's development if not carefully regulated. These apps and shows create screen-time demons, making it ever more difficult to pull the iPad away as the addictive programming conditions the brain to demand more frequent, faster dopamine releases. Even before the widespread use of generative AI, our technology was designed to keep us hooked and wanting more.
Concurrently, AI algorithms have made large strides in their accuracy and ability to manipulate dopamine. AI-generated videos have also gained in popularity as apps to create this kind of content are now fully accessible to the public. This rapid integration of unregulated AI-generated content into children's media ecosystems, amplified by engagement-driven algorithms, poses a significant threat to the cognitive, emotional, and social development of young children.
Case study: Methodology
In writing this paper, I wanted to examine what unmonitored screen time produces in the algorithmic feed of a young child, in the context of child development, AI, and ethical concerns. I started by creating a new TikTok account and disabling any indications of my location, age, and race. Then, I played CoComelon videos on TikTok for 90 minutes, allowing full episodes to play all the way through or autoplay to the next CoComelon short. After those 90 minutes, I turned my attention to the premise of the experiment: the For You page and its algorithm-generated content. I watched each video on the For You page for 7 to 8 seconds without liking, commenting, or otherwise skewing the algorithm. I skipped the sponsored content and saved/bookmarked every non-sponsored video, rather than only those I felt most pertained to my research; this way, I could later review the videos in the order they were played without researcher confirmation bias.
Case study: the Findings
Of the 80 videos I saw in 30 minutes, 72 were AI-generated. The remaining eight included a Labubu in a blender, the daycare child tragedy, a guy at the pool "aura farming," two videos of a Chinese teacher teaching profanity, CoComelon characters getting thrown into a mop bucket, a group of girls performing a ritual to pass their exams, and a mom upgrading her toddler's bathroom with all Bluey character products. In other words, the overwhelming majority of the videos this algorithm sends to children involved little to no human thought, creativity, or editing in either their material or their language.
Regarding tone, the videos started with a variety of AI animals and fruits designed to look like and mimic human babies; then the content unexpectedly became more negative, including videos featuring fat-shaming, brain rot, and blatant racism. The negative content fell into four categories: the proliferation of vicious stereotypes, inappropriate and adult-themed content, the normalization of violence and distress, and the uncanny and nonsensical. Each type of content can have lasting impacts on children.
Racial Stereotyping

The first, and perhaps most blatantly obvious, theme was the proliferation of racial stereotypes. There were multiple versions of the Jet2 holiday theme, with Indian, Vietnamese, and African versions of AI babies singing the jingle in horrendously exaggerated accents. There were two Black AI babies discussing going back to the "cotton farms," along with numerous other African American stereotypes and jokes. There was an Asian baby named "Bing Ding Ling" whose favorite food was cats. There were two videos of an Indian kid talking to his parents: one declaring that curry will not be disrespected in their house, and the other in which the kid says he wants to be a doctor, but his father retorts that he "will be an Uber driver or a scammer like every other man in this family."

This is the most damning finding, providing clear evidence of AI models laundering and amplifying societal biases and presenting them as entertainment to the most impressionable minds, directly violating the right to be raised free from prejudice. In most instances, prejudice is transferred from parents to children through upbringing. A child's racial prejudice is created from a mix of societal and contextual factors. Children are highly social learners who observe their environment and absorb the patterns, attitudes, and norms they find prevalent (Misch, 2022). Now, there is the risk of prejudice being transmitted regardless of parental relationship and influence.
Adult-themed content

The second outstanding theme was the presence of inappropriate and adult-themed content. There were many instances of profanity, including from well-known figures beloved by children, such as Elmo. There were four videos of AI princesses and female villains singing lines that began with the phrase "hey big back" and ended with a variety of body-shaming comments, including "your waistline is looking like the Brooklyn Bridge," "how many times have you been to the fridge today," "you've got more rolls than a bakery, time to hit the gym," and "double chin is deep like the Grand Canyon."

There were also sexual references in some of the conversations between AI babies, including one about a mother finding a "slimy cucumber." There were rapping newborns saying they were looking for the doctor to put them back in [their mother], and that their parents are broke and they don't like it here. In two videos, a Chinese instructor taught swear words and phrases. There were references to "Diddy parties," alluding to P. Diddy and his sexual assault allegations, and to vans full of candy that the children got into. The algorithm fails to distinguish child-friendly aesthetics, such as the cartoons and babies, from adult themes, exposing children to profanity and cynicism that can easily take root in a developing mind.
Adultification describes the exposure of youth to adult knowledge and themes, which can relate to sex, violence, and profanity, some of which appeared over the course of these videos. Lack of parenting skill, low family closeness, low psychological availability to the adolescent, and lack of autonomous decision-making were the strongest predictors of adultification in childhood (Bernard, 2010). Adultification is negatively related to living autonomously in adulthood, withstanding the transition to emerging adulthood, and number of sexual partners; furthermore, it is positively related to smoking marijuana and binge drinking (Bernard, 2010). Being exposed to too much adult content too young can create a breeding ground for bad habits and can turn the natural curiosity of a child destructive.
Normalization of Violence and Distress

There was also significant normalization of violence and distress in the videos. There were odd, uncanny-valley videos of CoComelon characters being pushed into a mop bucket and spun around, and of a Labubu doll being thrown into a blender and made into a shake. There were multiple videos of realistic-looking AI babies crying, including one walking the streets in a diaper, calling his grandma on the phone, saying he missed her and asking where she was. The most horrendous video found in this short 30-minute period told the real story of a child who had died at daycare due to the negligence and abuse of a daycare worker. The algorithm mixes cartoonish "violence" with real-world tragedy, blurring lines and desensitizing children to real suffering.

Videos with extreme violence or instances of death like the daycare video can slowly begin to desensitize a child as they become accustomed to the shock. Some research indicates that age is a modifier in this as well, with younger children having a tendency to internalize symptoms, causing more problems later, while older children tend to respond with externalizing behaviors (Kennedy, 2016).
Nonsensical Content

The last theme was the uncanny and nonsensical. Though odd and ill-fitting in the mind of a young child, these are the videos that fall into the "brain rot" category of short-form video amusement. Examples include an AI baby, chicken, or banana dancing to a song that repeats the phrase "chicken, banana" over and over. Videos of a baby that looks like a Froot Loop sitting in a bowl of milk eating Froot Loops, or of babies that look like fruit eating the same fruits they are meant to mimic while babbling, were also very common at the beginning of the 30 minutes. There were ASMR videos of plastic Elmo characters being liquified and spread on toast, along with spinning fish and loud blaring music. This content is designed for pure sensory stimulation; it lacks any narrative, logic, or educational value, and it conditions the young brain for passive, rapid-fire consumption of information with no clear cognitive benefit. True to the apt name "brain rot," this category constitutes the clearest example of dopamine receptors being pushed into overdrive.
The New Predator: Generative AI and the Algorithm Threat
Apps such as TikTok use algorithms to tailor a viewer's feed with every like, comment, and second of logged watch time, raising concern over their intentionally addictive nature and the long-term effects of prolonged scrolling. These systems rely on human biases and are largely unpredictable by nature. AI models are trained on all the human-created information their builders can find, neatly cataloged across every niche website on the web. In other words, AI models are trained on vast and biased internet data and can produce work that is just as biased or worse. The algorithm may notice that someone liked a video with racist or sexist undertones and then send the viewer another video with a slightly stronger message.
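The feedback loop described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not TikTok's actual system; the weights, tags, and class names are all hypothetical, chosen only to show how engagement signals alone, with no notion of developmental appropriateness, determine what surfaces next in a feed.

```python
from collections import defaultdict

# Hypothetical engagement weights: watch time and rewatches dominate.
WEIGHTS = {"watch_seconds": 1.0, "like": 5.0, "rewatch": 8.0, "share": 10.0}

class FeedRanker:
    """Toy engagement-driven recommender (illustrative only)."""

    def __init__(self):
        # Affinity score per content tag, learned purely from interactions.
        self.affinity = defaultdict(float)

    def record(self, video_tags, interaction):
        """Update tag affinities from one viewing session."""
        signal = sum(WEIGHTS[k] * v for k, v in interaction.items())
        for tag in video_tags:
            # Note: nothing here checks quality or age-appropriateness.
            self.affinity[tag] += signal

    def rank(self, candidates):
        """Order candidate videos purely by predicted engagement."""
        return sorted(
            candidates,
            key=lambda v: sum(self.affinity[t] for t in v["tags"]),
            reverse=True,
        )

ranker = FeedRanker()
# A child watches a nursery rhyme twice through, then a short AI-baby clip.
ranker.record(["nursery_rhyme"], {"watch_seconds": 120, "rewatch": 2})
ranker.record(["ai_baby"], {"watch_seconds": 30})

feed = ranker.rank([
    {"id": 1, "tags": ["ai_baby"]},
    {"id": 2, "tags": ["nursery_rhyme"]},
    {"id": 3, "tags": ["news"]},
])
# The most-engaged-with tag surfaces first, regardless of content quality.
```

Even in this toy version, every second of watch time reinforces whatever captured attention last, which is exactly the loop that lets "slightly stronger" versions of the same content climb the feed.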
AI insiders such as Trevor Callaghan, an ex-employee at DeepMind, Elon Musk, a worldwide notable advancer of AI in technology, and even the father of cybernetics, Norbert Wiener, have all expressed concerns over the technology (LaGrandeur, 2020). Facebook's head of AI, Jerome Pesenti, tested these models and algorithms, plugging in words such as "Jews," "Black," "women," and "Holocaust." The AI-generated tweets and content contained phrases such as "jews love money, at least most of the time" and "the best female startup founders are named…girl." These generated tweets are superficial, blatantly problematic, and social cannon fodder that can affirm people's negative views of the world around them. This is the type of generated content that makes its way onto TikTok and, at times, into the minds of children.
TikTok's algorithm exemplifies our letting go and placing trust in algorithms and AI technology. A viewer who is endlessly scrolling does not actively choose the content they are watching. These algorithms foster trust through reliability: the videos become funnier and more attuned to the viewer's tastes as the system analyzes every data point generated by their interaction with the app. One forgets that there is a methodology determining which videos are shown to whom, and that a million factors play into capturing the attention of millions. This is a sort of blind trust on the part of the viewer, who forgets they are making no active choices in the content they consume. Such trust feeds several critical problems of the digital age, including the proliferation of fringe groups into the mainstream, political polarization, the spread of misinformation, and, above all, phone addiction.

A viewer's reliance and trust, combined with a platform's unchecked pushing of content, can produce far more harmful and lasting effects on a young child, whose personality is still elastic and whose intellectual capacities are not yet realized. Many adults may remember the news phenomenon from only a little over a year ago, in which many were duped by an AI-generated picture of a young girl wearing a vest and holding a puppy in the aftermath of Hurricane Helene. It was one of the first widely recognized AI-produced images that many on Facebook and other platforms believed to be real, sparking political fights online over the teary-eyed girl.
Discussion: A Human Rights Framework for a Digital Crisis
The findings of the case study bear out several of the themes and theories presented above. Brain rot directly increases a child's probability of being hyperactive, anxious, and less likely to listen, as their dopamine receptors are constantly manipulated by continual scrolling and time spent in the app. Stereotyped and adult content actively sabotages healthy social-emotional learning and ingrains prejudice at a critical young age, when intellect, emotion, and empathy begin to take root. This mix of AI-generated content and compulsive scrolling creates a perfect storm of compounding developmental damage. While the effects are not wholly irreversible, they may stay with a child well into adulthood.
Especially when compounded with the profit that investors see in keeping children attached to this short-form media, one can clearly frame these phenomena as violations of the rights of the child. Several Articles of the UN Convention on the Rights of the Child pertain to these violations and to this study. According to Article 29, parties agree that the education of the child shall be directed to the development of the child's personality and abilities, the development of respect for human rights and fundamental freedoms, and the preparation of the child for responsible life in a free society in the spirit of "understanding, peace, tolerance, equality of sexes, and friendship among all peoples, ethnic, national and religious groups and persons of indigenous origin," among others. The algorithm-produced content consumed by children undermines the development of their personality, talents, and respect for human rights and cultural identity.
Article 19 states that parties "shall take all appropriate legislative, administrative, social and educational measures to protect the child from all forms of physical or mental violence, injury or abuse, neglect or negligent treatment, maltreatment or exploitation… while in the care of parents or legal guardians." If social media and apps like TikTok are used by a parent to forgo the responsibility of playing with, enriching, or communicating with their child, then such use of TikTok and similar short-form apps could fall under a violation of Article 19. This is especially the case if the child's access and freedom on the apps are unlimited and the content is neither censored nor monitored; this exposure could constitute a form of psychological or emotional neglect or abuse. It falls on parents, society at large, and governmental policies and regulations to mitigate these negative effects.
Lastly, like any form of commercial media, this content is specifically tailored to keep a viewer's interest and attention for as long as possible in the name of profit. As mentioned earlier, many tech executives and social media CEOs view the child entertainment market as largely untapped and full of potential for future profit, profit made by preying on young, impressionable minds during the years most critical for their future success and development. Article 3 of the UN Convention states that in all actions concerning children, whether undertaken by courts of law, legislative bodies, or other institutions, the best interests of the child shall be a primary consideration. States should ensure the protection and care necessary for the child's well-being and take all appropriate legislative and administrative measures to that end.
We must make it known that the algorithm is optimized for engagement and profit, not for the child's best interests. The most effective solution to this ever-growing problem would be to create regulatory models that quantify and define these actions and intentions. However, there is a long-standing failure of existing regulatory models regarding technology, largely due to its extraordinarily rapid growth and widespread use. Kevin LaGrandeur (2020) describes three levels of regulation necessary for governing technology and child usage.
The first is at the personal, or parental, level. Parents are the first line of potential restrictions and guidance on the matter, as they are closest to the child and spend the most time with them. The children's screen time is almost directly correlated to their parents—from how many screens they have to what kind of regulations or rules the parents have in the household regarding technology. Lately there has been a strong trend of parents often being unaware or overwhelmed by this technology, or adversely, the tendency to use screens as a "babysitter"; as a result, they are often an unreliable first line of defense (Kneteman, 2019).
The second level is corporate self-regulation. This has always been a topic of moral debate, and it has grown worse over the years as unabated capitalism and monopolies run rampant, with executives consistently choosing profit over the betterment of their consumers, the environment, and society as a whole. Corporate self-regulation is clearly failing, as the profit motive vastly outweighs ethical responsibility.
The last level is government regulation. Government regulation has little legal precedent for technology, let alone for rapidly advancing AI and LLMs, so our public policy and regulations do not accurately reflect the needs of the population. Government regulation is dangerously slow, lagging far behind the pace of technological change.
Final Analysis and Recommendations
In our inherently technological world, it is unrealistic to exclude technology from the life of a young child, at home or school. Technology can even be a wonderful aid for childhood education, producing better and more active ways to impart information.
However, we are in the middle of a vast, uncontrolled experiment on the developing minds of a generation, with early evidence pointing to catastrophic developmental consequences and human rights violations. I argue that the proliferation of unregulated AI-generated content for children actively undermines a child's right to healthy cognitive and emotional development, protection from prejudice, and the ability to form a coherent understanding of reality, all of which are crucial to a healthy life after childhood.
In organizing a call to action, we must adopt a multi-layered approach to digital guardianship. Parents and educators are the first step in making sure that the content children consume aids their overall development. Public health campaigns on digital literacy should educate parents about the specific dangers of AI-generated content. Parents should follow CPS advice to be present and engaged during screen time, ensuring that children do not accumulate hours of unregulated, unrestricted viewing. Lastly, parents and educators should demand better for their children. They should see what the entertainment and social media industries are doing to their children, purposefully preying on their attention and young brains for gain. Real change starts at the most basic level, and the most basic level consists of those closest to the child.
Secondly, technology companies must make the conscious decision to choose ethics and our societal future over immediate profit. According to LaGrandeur (2020), there are eight key themes that ethical AI development policies should address: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Tech companies would have to choose to radically overhaul algorithms for children's content so that they prioritize developmental appropriateness over engagement metrics. Companies and governments should also invest in moderation teams specifically trained to identify and remove harmful AI-generated content, especially stereotypes and adult themes.
Lastly, governments and human rights organizations must get involved in their own ways. We should call for a regulatory body for AI content in a similar fashion to the FDA for medicine or the FAA for aviation, with the power to audit algorithms and ban or restrict harmful content types. We should fund research and commission long-term neurological and psychological studies on the effects of this content on children. Additionally, we should redefine the term "harm" in relation to children and technology, and work to legally define exposure to manipulative, stereotype-laden AI content as a form of child endangerment.
We have a chance to begin the fight and outreach now for results that could reverse the harm already occurring. The choice here is not between technology and no technology; it is between a digital world that nurtures young minds and one that exploits them for profit. Failure to act decisively now is a failure of our most fundamental duty to protect the next generation.
Glossary
AI-generated content – Media (video, audio, images, text) created wholly or partly by artificial-intelligence systems rather than by a human creator. In the study it refers to the short clips that appeared on TikTok after the CoComelon "seed" videos.
Adult-themed content – Material that includes profanity, sexual innuendo, or other mature references that are inappropriate for the target age group (2-4 years). The paper documents several such instances that slipped through the child-focused feed.
Algorithm – A set of computational rules that decides which pieces of content are shown to a user and in what order. TikTok's “For You” feed is driven by a proprietary algorithm that optimizes for watch-time and user engagement.
Brain rot – A colloquial term the author uses to describe the cognitive dulling and reduced attention that result from rapid, dopamine-driven scrolling through low-quality, AI-produced videos.
CoComelon – A popular YouTube/TikTok channel that delivers bright, fast-paced nursery-rhyme videos aimed at children aged 0-4. It was used as the “seed” content to prime the TikTok algorithm in the case study.
Digital literacy – The ability to critically evaluate and understand digital media. The recommendations call for public-health campaigns that teach parents and educators to recognize harmful AI content.
Dopamine – A neurotransmitter associated with reward and pleasure. The paper highlights how hyper-stimulating videos trigger dopamine release, reinforcing repeated viewing.
Engagement-driven algorithm – An algorithm that prioritizes metrics such as total watch-time, re-watches, and shares over any measure of developmental appropriateness.
Human rights framework – The set of international legal standards (chiefly the UN Convention on the Rights of the Child) that the author invokes to argue that unregulated AI content violates children’s rights to healthy development, protection from prejudice, and an accurate sense of reality.
Stereotype amplification – The process by which AI models, trained on biased internet data, reproduce and even exaggerate harmful stereotypes (e.g., racial accents, gendered body-shaming). The case study lists several examples that appeared in the 30-minute sample.
UNCRC Article 3 – “The best interests of the child shall be a primary consideration in all actions concerning children.” The paper uses this to critique profit-driven content that harms development.
UNCRC Article 19 – Calls for protection of children from all forms of mental violence, including exposure to harmful media. The author suggests that unrestricted AI-generated content can constitute a breach of this article.
UNCRC Article 29 – Requires education to develop respect for human rights, tolerance, and equality. The paper argues that stereotyped AI videos undermine these goals.
Uncanny valley – The psychological discomfort people feel when they see something that is almost, but not quite, human-like. The study notes that many AI-generated babies and characters sit in this unsettling zone, which can be confusing for children.
References
Abel, Allen. “P Is for Prejudice: Psychologist Frances Aboud Has Found That Children as Young as Four Show Signs of Racism. Could the Urge to Discriminate Be in Our Genes?” Saturday Night, vol. 116, no. 24, 2001, p. 32.
Akindele, Nadia. “Reading Culture, Parental Involvement and Children’s Development in Formative Years: The Covenant University Experience.” Library Philosophy and Practice, 2012, p. 1.
Bernard, Julia Margarita. Exploring the Predictors and Outcomes of the Adultification of Adolescents. no. 5, 2010. ProQuest Dissertations & Theses.
Budnik, Christian. “Can We Trust Artificial Intelligence?” Philosophy & Technology, vol. 38, no. 1, 24 Jan. 2025, https://doi.org/10.1007/s13347-024-00820-1.
Capanna-Hodge, Roseann. “Screen Time, Autism and CocoMelon.” Medium, 21 Mar. 2023, medium.com/@drroseanncapannahodge/screen-time-autism-and-cocomelon-566c7740d822.
Chow, Andrew R. “ChatGPT’s Impact on Our Brains According to an MIT Study.” Time, 23 June 2025, time.com/7295195/ai-chatgpt-google-learning-school/.
Cohen, David. The Development Of Play. Fourth edition., Routledge, 2018.
Cost, Katherine T., et al. “Patterns of Parent Screen Use, Child Screen Time, and Child Socio‐emotional Problems at 5 Years.” Journal of Neuroendocrinology, vol. 35, no. 7, July 2023, pp. 1–11. EBSCOhost, https://doi.org/10.1111/jne.13246.
Daugherty, Lindsay. Moving beyond Screen Time : Redefining Developmentally Appropriate Technology Use in Early Childhood Education. RAND Corporation, 2014.
Dergaa, Ismail, et al. “From Tools to Threats: A Reflection on the Impact of Artificial Intelligence Chatbots on Cognitive Health.” Frontiers in Psychology, vol. 15, 2 Apr. 2024, https://doi.org/10.3389/fpsyg.2024.1259845.
“How ‘CoComelon’ Became a Mass Media Juggernaut for Preschoolers.” Los Angeles Times, 13 Nov. 2024, www.latimes.com/entertainment-arts/business/story/2024-11-13/how-cocomelon-became-mass-media-juggernaut-preschoolers.
Jain, Lakshit, et al. “Exploring Problematic TikTok Use and Mental Health Issues: A Systematic Review of Empirical Studies.” Journal of Primary Care & Community Health, vol. 16, Mar. 2025, https://doi.org/10.1177/21501319251327303.
Kennedy, Traci M., and Rosario Ceballo. “Emotionally Numb: Desensitization to Community Violence Exposure Among Urban Youth.” Developmental Psychology, vol. 52, no. 5, 2016, pp. 778–89, https://doi.org/10.1037/dev0000112.
Kneteman, Lindsay. “SCREEN DEMONS: Kids Love Screen Time, and Parents Love a Few Quiet Moments. But Why Does It Seem like Tablets and Televisions Make Kids’ Behaviour Way Worse?” Today’s Parent, vol. 36, no. 6, 2019, p. 30.
LaGrandeur, Kevin. “How Safe Is Our Reliance on AI, and Should We Regulate It?” AI and Ethics, vol. 1, no. 2, 6 Oct. 2020, pp. 93–99, https://doi.org/10.1007/s43681-020-00010-7.
Misch, Antonia, et al. “The Developmental Trajectories of Racial and Gender Intergroup Bias in 5- to 10-Year-Old Children: The Impact of General Psychological Tendencies, Contextual Factors, and Individual Propensities.” Acta Psychologica, vol. 229, 103709, 2022, https://doi.org/10.1016/j.actpsy.2022.103709.
“TikTok Brain: Understanding the Impact on Modern Attention Spans.” ReNu Counselling & Psychotherapy, 5 Sept. 2024, renucounselling.ca/tiktok-brain/.
Virós-Martín, Clara, et al. “Can’t Stop Scrolling! Adolescents’ Patterns of TikTok Use and Digital Well-Being Self-Perception.” Humanities and Social Sciences Communications, vol. 11, no. 1, 30 Oct. 2024, https://doi.org/10.1057/s41599-024-03984-5.
Vogel, Lauren. “Quality of Kids’ Screen Time Matters as Much as Quantity.” Canadian Medical Association Journal (CMAJ), vol. 191, no. 25, 2019, pp. E721–E721, https://doi.org/10.1503/cmaj.109-5767.
