The use of chatbots continues to rise

As the use of chatbots continues to rise, they have become a popular way of interacting with technology. ChatGPT and the upcoming GPT-4 are designed to simulate real conversations and respond automatically to user inquiries. While chatbots serve various purposes, such as customer service and personal assistance, they also raise concerns, particularly for young users. In this blog post, we explain the risks chatbots may pose to children and propose potential solutions.

One of the primary dangers associated with chatbots is the possibility of encountering inappropriate content. Despite being trained on vast amounts of data to provide accurate responses, chatbots may still fail to filter out improper or offensive material. This concern is particularly significant for children, who may lack the critical thinking skills to recognize when something is unsuitable. As a result, chatbots could unintentionally expose kids to objectionable material, which can have harmful effects.

Another potential risk associated with chatbots is the possibility of grooming or exploitation. Chatbots can be programmed to collect data such as a user's age, gender, and location. Although this data is typically used to deliver personalized responses, it can also be exploited for more sinister ends. Groomers or predators could use chatbots to obtain sensitive information about children, making it easier to target and exploit them. Children may also be more vulnerable to these attacks because they may trust a chatbot more than they would a stranger.

Finally, chatbots might contribute to social exclusion or stunted social development in children. Although chatbots are designed to mimic human conversation, they cannot replace the value of face-to-face interaction. Children who rely too heavily on chatbots may miss out on developing important social skills such as empathy, communication, and emotional regulation. They could also find it difficult to build lasting relationships, which may leave them feeling lonely and isolated.

To mitigate these risks, it is crucial to monitor children's use of chatbots closely. Parents and responsible adults should review the content and appropriateness of the media children consume. It is also important to educate children about the data chatbots collect and to help them understand why they should limit the personal information they share. Finally, providing children with opportunities for meaningful human interaction, whether through playdates, family time, or other social activities, is paramount to fostering their social growth and well-being.

The League chatbot draws on recent advances in natural language processing, combining a text semantic similarity approach with a deep learning (DL) chitchat classifier. When a child asks a question, the chatbot computes its semantic similarity against a collection of candidate questions and corresponding answers stored in a Knowledge Base (KB). The chatbot was designed specifically for children and trained on a dataset curated to be child-friendly. By identifying the most closely related question, it returns that question's answer as an appropriate response. The chitchat DL classifier, in turn, was trained on a range of everyday topics, producing a more natural and fluent conversational experience.
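To make the retrieval step concrete, here is a minimal sketch of how a KB lookup by semantic similarity can work, with a low-confidence fallback where a chitchat model could take over. This is not the League chatbot's actual implementation, which is not shown in this post: the sentence-transformers library, the model choice, the example KB entries, and the similarity threshold are all illustrative assumptions.

```python
# Sketch: answer a child's question by finding the most semantically
# similar question in a curated Knowledge Base (KB).
# Assumptions (not from the source): library, model, KB, and threshold.
from sentence_transformers import SentenceTransformer, util

# Hypothetical child-friendly KB of (question, answer) pairs.
KB = [
    ("Why is the sky blue?",
     "Sunlight scatters off air molecules, and blue light scatters the most."),
    ("What do pandas eat?",
     "Pandas mostly eat bamboo."),
    ("How do plants grow?",
     "Plants use sunlight, water, and air to make their own food."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small sentence encoder
kb_questions = [q for q, _ in KB]
kb_embeddings = model.encode(kb_questions, convert_to_tensor=True)

def answer(child_query: str, threshold: float = 0.5) -> str:
    """Return the answer whose KB question is most similar to the query."""
    query_embedding = model.encode(child_query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, kb_embeddings)[0]
    best = int(scores.argmax())
    if float(scores[best]) < threshold:
        # Below the threshold, a chitchat classifier/model could take over.
        return "I'm not sure about that one. Let's chat about something else!"
    return KB[best][1]

print(answer("why does the sky look blue"))
```

Restricting answers to a curated KB is itself a safety measure: the chatbot can only say things that were reviewed in advance, and anything it cannot match confidently is routed to the separate chitchat path rather than answered by guesswork.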