AI, Big Data and kids: The potential and threat posed by biometrics

By Susan Raab, CDP Institute

Artificial intelligence is increasingly part of sales, marketing, customer service and production. It is also part of product development, including in the children’s book and toy industries, an area that, according to a 2020 report from the Emotional AI Lab, is expected to grow throughout the decade. Recent news about integrating story and play using AI includes Mattel’s announcement that it is collaborating with Bookful, the augmented reality (AR) book app, to produce AR-activated books for characters, beginning with Barbie® and Thomas the Tank Engine®.

Educational publishers have led the way in digitized learning and now earn substantial revenue both from sales of digital texts and from the data they collect. Marketers know this data can also provide insights into what a family may purchase, but it is important to take care when pursuing this area for a number of reasons, not the least of which is the complex regulation surrounding children’s data.

Collecting data on young users and developing technology designed to interact with a child on an emotional level is also a very tricky business, and it is important to consider the implications of its various aspects. MIT Technology Review has noted that children “are often at the forefront when it comes to using and being used by AI, and that can leave them in a position to get hurt.” UNICEF, with similar concerns, has launched its Artificial Intelligence for Children policy to explore how to protect children’s rights in this area. The authors of the Emotional AI Lab report examined AI’s potential to influence kids’ behavior and project that this could add “a commercial dimension” to the child-parent dynamic.

A good number of products on the market can already respond to what we say and do, but with the spread of biometric data technology, far more can be captured and analyzed. This can include cameras to capture our expressions; embedded microphones to record the pitch and timbre of our voice; sensors to detect finger movement on a keyboard or screen; and monitors to track our pulse, blinks, and breathing. This biometric data can then be analyzed to ascertain how a person feels while interacting with a given product or experience, and it can be fed into algorithmic predictive models that try to anticipate how that person might behave in the future. This is what the Emotional AI Lab terms “Emotional AI,” which it defines as “technologies that use affective computing and artificial intelligence techniques to sense, learn about and interact with human emotional life.”
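
To make concrete what this kind of analysis can involve, here is a minimal, purely illustrative Python sketch that reduces a few of the biometric signals described above to a single inferred-emotion score using a toy logistic model. Every feature name, baseline, and weight below is an invented assumption for illustration, not any company’s actual method.

```python
# Purely illustrative sketch, not any vendor's actual system: how a few
# biometric signals might be reduced to features and scored as an inferred
# "emotional state". All feature names, baselines, and weights are invented.

from dataclasses import dataclass
import math


@dataclass
class BiometricSample:
    voice_pitch_hz: float          # from an embedded microphone
    keystroke_interval_ms: float   # from keyboard/touchscreen sensors
    blink_rate_per_min: float      # from a camera
    heart_rate_bpm: float          # from a wearable or monitor


def frustration_score(s: BiometricSample) -> float:
    """Map raw signals to a 0..1 'frustration' score with a toy logistic model.

    A real emotional-AI system would use models trained on labeled data over
    far richer features; the baselines and weights below are placeholders.
    """
    z = (
        0.010 * (s.voice_pitch_hz - 220.0)           # raised vocal pitch
        - 0.005 * (s.keystroke_interval_ms - 250.0)  # faster, jabbing input
        + 0.050 * (s.blink_rate_per_min - 15.0)      # elevated blink rate
        + 0.030 * (s.heart_rate_bpm - 80.0)          # elevated pulse
    )
    return 1.0 / (1.0 + math.exp(-z))  # squash to a probability-like score


sample = BiometricSample(voice_pitch_hz=260, keystroke_interval_ms=180,
                         blink_rate_per_min=22, heart_rate_bpm=95)
print(f"inferred frustration score: {frustration_score(sample):.2f}")
```

In a deployed system, scores like this would be computed continuously, aggregated over time, and fed into predictive models of future behavior; the sketch is meant only to show how readily raw biometric signals can be turned into labels about a person’s emotional state.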

This raises obvious ethical issues when applied to children because, as the Emotional AI Lab report says, children are vulnerable, “their privacy rights are not well protected,” and “existing legal instruments” like the General Data Protection Regulation (GDPR) and the Children’s Online Privacy Protection Act (COPPA) “do not do enough to ensure their data will be kept safe.” Further, the authors say research shows that “children are, depending on their age, unable to understand persuasive intent or separate ads from content,” so “marketing to children and parents based on a child’s negative emotions [for example, if their parent chooses not to get a new product that’s been presented to them as desirable] is deeply unethical and harmful.”

The authors believe this could easily add a new “commercial dimension into parenting” in three ways: 1) marketing to parents when they themselves are vulnerable and may be more receptive to something that seems to satisfy their child; 2) exploiting the fact that parents are largely unaware of the type of data being collected on their children, and so make no effort to stop practices they might otherwise find unacceptable; and 3) mounting a two-pronged marketing effort that reaches out to a child to heighten the child’s desire for a product while simultaneously targeting the parent with the message that the product can make their child happier.

For marketers and corporations, it is important to understand where this aspect of business could be headed, both to ensure ethical practice and to protect children, who cannot easily advocate for their own welfare even if they understood the importance of doing so. This is also an extremely sensitive area that could carry significant social and legal consequences for companies, especially as laws evolve, as they should in this area.

More information is available at:

- UNICEF’s Policy Guidance on AI for Children

- The UK Information Commissioner’s Office’s Code of Practice, issued this September, which sets out 15 standards for online services likely to be accessed by children

- The U.S. Federal Trade Commission’s website on the Children’s Online Privacy Protection (COPPA) Rule