A recent report from Human Rights Watch found that the personal details and images of over 170 Brazilian children have been used without consent to train AI systems. Here’s what parents need to know and how they can protect their children’s privacy. But first, let’s look at what happened.
What Happened?
- Unauthorized Use of Images: Photos and personal information of children were taken without permission and included in a dataset used to train AI models.
- Sources and Timeline: These images, some as recent as 2023 and others dating back to the mid-1990s, were scraped from personal blogs, parenting sites, and low-view YouTube videos.
Why parents should care:
Hye Jung Han, a children’s rights and technology researcher, emphasizes the risk: “Children’s privacy is violated when their photos are scraped and used to train AI. This technology can then create realistic imagery of children, which can be manipulated by malicious actors.”
What is happening now:
- Data Removal: LAION, the organization behind the dataset, has removed flagged images and is working with various agencies to eliminate all references to illegal content.
- YouTube’s Policy: YouTube prohibits unauthorized scraping of content and is taking action against such violations.
Why Parents Should Be Concerned
This issue highlights the potential dangers of sharing personal content online. AI-generated content, such as deepfakes, can have serious implications, including privacy violations and bullying.
Facts:
- Reports of child abuse images online increased by almost 50% during lockdown periods (https://crystaline.uk/huge-rise-in-reports-of-online-child-abuse-images/)
- Nearly 7% of people who posted photos of children online reported receiving requests for child abuse material (https://www.theguardian.com/media/2024/may/02/parents-share-photo-kids-online-identity-aic-report-sharenting)
- Reports to the National Center for Missing & Exploited Children (NCMEC) rose by more than 12% in 2023, surpassing 36.2 million reports.
- The Internet Watch Foundation (IWF) identified over 100,000 webpages with self-generated CSAM featuring children under 10, a 66% increase from the previous year.
- Snapchat was the platform identified by police in 44% of child abuse image offences where a specific platform was recorded.
- Meta-owned platforms (Instagram, Facebook, WhatsApp) were used in a quarter of these offences.
- Facebook reported over 17.8 million incidents of people sharing child sexual abuse imagery on Messenger in 2022.
Tips for Parents to Protect Their Children’s Privacy
- Limit Online Sharing: Think twice before posting photos and videos of your children online. Even seemingly harmless posts can be misused.
- Use Privacy Settings: Make sure your social media accounts and other online platforms are set to private. Only share content with trusted friends and family.
- Educate Your Children: Teach your kids about the importance of privacy and the potential risks of sharing personal information online.
- Stay Informed: Keep up-to-date with the latest developments in online privacy and data protection to better safeguard your family’s information.
- Monitor Online Presence: Regularly check what information and images of your children are available online. Use tools like reverse image search to find where their photos might be used.
- Report Misuse: If you find your child’s images or personal information being used without consent, report it to the relevant platform or authority immediately.
Where to get help:
If you are in the UK, you can get help from the organisations below:
- NSPCC Helpline: 0808 800 5000
- CEOP (Child Exploitation and Online Protection Command): Report online grooming and abuse
- Internet Watch Foundation: Report indecent images of children online
Read the original article: https://www.wired.com/story/ai-tools-are-secretly-training-on-real-childrens-faces/