The tech sector is in government crosshairs again, with the focus on child safety and what tech companies are doing – or not – to ensure children are protected.
That big forces are moving against the tech industry is nothing new. The backlash against the sector has been ongoing for several years, from concerns over social media to privacy and advertising fears.
But things have picked up pace in recent weeks. In the US, senators are moving against big tech, pushing for increased child safety online and more responsibility to land at the feet of the companies and platforms.
US president Joe Biden used his state of the union address in February to take aim at tech companies, calling for bipartisan support to ban targeted online ads and to bring in stronger protections for children and young people to safeguard their privacy, safety and health online.
In the UK, the upcoming Online Safety Bill is expected to hold platforms to account, reducing child exposure to harmful content and forcing tech companies to alter their algorithms with online safety in mind. The EU has also been working to shore up child safety online in recent years, keeping the attention on the topic and the spotlight on the responsibilities borne by the platforms that are making billions of euro a year.
The message is clear: if the companies won’t voluntarily act in the best interests of younger users as the authorities see it, then they will be compelled to do so by law.
For Mindy Brooks of Google, online safety for children is at the heart of everything her department does. As senior director and general manager for kids and families, she focuses on designing technology that meets the needs of families. Her team also helps other Google teams to understand developmental principles for families that underpin product design.
Brooks has a degree in psychology and a master’s in applied developmental and educational psychology from New York University.
“I’m a researcher by trade with a child development background. I’ve been able to come into the tech space and think about it through the ages and stages, and how you think about that and understand the needs of parents and kids around the world,” she says. “That’s been one of the most fulfilling things when I first started at Google – trying to set the stage on how – what are the right protections we need, how do we build parental controls such as Family Link, and make sure it’s accessible to all family types? It’s a very fulfilling thing to work on, because you’re listening to users all over the world to understand how do we do this, and in a way that enables a parent in Kansas versus a parent here in Ireland and make sure that we are thinking about that really broadly.”
Brooks began her career in the media with a stint at Sesame Workshop, where she worked as a researcher and analyst before moving up to director of digital content and innovation research, just as touch-enabled devices were really hitting the mainstream.
“It was a pivotal career moment for me in which I really started to become interested in how we set standards, guidelines and practices that are based on child development ... making sure we’re creating great experiences,” she says. “I truly believe you can get great things out of technology and great things out of television, if done right. And I think Sesame Street proved that for television.”
Brooks had her chance to prove that for technology when she moved to Netflix in 2014, and then to Google as a researcher.
But child safety online is a tough task. The global nature of Google’s products means there are different laws and regulations to navigate in different territories. However, the company has put in place a number of standard child safety measures: SafeSearch is on by default for accounts of children under the age of 18, personalisation is off, and mature content is gated on YouTube.
On the Play Store, a separate “Kids” tab recommends apps that have been reviewed and approved by teachers, split into age groups. To get into that section, the content – including any ads – has to align with the rules of Google’s Family programme, something the company says it takes seriously.
“All of those things are ways in which we want to make sure we’re protecting kids and thinking about where can we put in place the right baseline protections for children as well as understanding where there are regional differences,” says Brooks.
There are other factors to consider too. “A five-year-old is totally different from a 15-year-old, and different from a 17-year-old. We need to make sure we’re thinking about our protection holistically. That’s something my team is very committed to.”
As technology has advanced over the years, so too have the challenges facing families in how they navigate online safety. But the underlying principles are the same, says Brooks.
“Ages and stages don’t change from a developmental psychology perspective. I think this influence of technology and media definitely comes earlier than ever before, so parents are dealing with things much earlier,” she says. “One thing we think a lot about and something that comes from my days at Sesame Street is, how do you take a platform or medium and use it for good? How do we make sure that kids are getting great content on their devices? How do we make sure that the experiences are built in ways that they can easily navigate? Those are things that we take into consideration knowing that kids are getting devices earlier than ever before, and that those challenges come much earlier, as well as making sure parents have resources so that they can understand how you navigate it. A lot of the time kids are more savvy at it; they’re digital natives.”
Brooks was in Dublin last week for an online safety summit at Google’s Irish headquarters, bringing industry leaders, regulators and NGOs together to talk about keeping children safe online.
From Google’s perspective, the company is trying to help people understand its approach to children and family, and what it has been doing in the past couple of decades, while also learning from the experts in the room. “It’s about how we build to protect, respect and empower kids,” says Brooks. “Thinking about that deeply but also inviting conversation and collaboration and feedback, so that we can continue to improve and learn from those in the room is our ultimate goal.”
The top priority, she says, is to make sure that the company is learning from and collaborating with experts. “The other thing is wanting to make sure that we are sharing what our focus areas are and also getting feedback on that as well.”
Google can’t step back from its responsibilities on this one. The company owns Android, one of the two big smartphone platforms. Its vast search network is so ubiquitous, it has become a verb. Through YouTube, it is exposing people to user-generated content that can at times be controversial. And in 2019 it was fined $170 million for allegedly violating child privacy legislation in the US through the online video site. It needs to tread carefully or risk the wrath of regulators.
“Everyone recognises kids are getting devices earlier than ever before. They have access so early. And so I think everyone is focused on how do we make sure we’re keeping kids safe and have intentional conversations around some of these areas,” she explained. “We know we need to work together to make sure we are building a better internet and world for kids.”
Google has a number of tools to help it meet those goals. In 2017 it released Family Link, an app that allows parents to monitor and control their child’s digital activity. The app has recently been overhauled to make it easier for parents to manage, with new features such as the ability to see a device’s battery life through the app, or to get notifications linked to location tracking. Your child arriving home, for example, would trigger an alert so you know where they are when you need to.
The company has backed a number of programmes aimed at increasing safety online. Last week Google said it would commit an additional €5 million in philanthropic funding this year to specialised NGOs that are using the Be Internet Awesome curriculum to improve media literacy and online safety training across northern, central and eastern Europe.
But there are new challenges coming down the line. The race to create the next version of the internet through the metaverse, and the artificial intelligence arms race, could have unintended consequences for a generation of younger internet users. And the current perception of the sector could mean that people are less likely to trust established tech companies with this brave new world.
“We’re committed to using AI responsibly, and we’ve been showing that for years in what we’ve been doing. How do we make sure that we are using this responsibly, as well as keeping kids safe? It’s a top commitment for us, as we think deeply about this space,” says Brooks. “We’re really focused on making sure we’re researching, understanding it and thinking about the responsibility and the policies we need to put in place for users in general as well as kids. From a child development perspective, we’re focused on making sure we’re building holistically across Google products that understand the ages and stages of children, and building in the right protections.”