The Privacy Divide

One of the questions that arises in privacy work is whether and how privacy rights - or access to those rights - play out across economic lines. The answer is complicated, and this post is a messy step into what will almost certainly be an ongoing series of posts on this topic. Both inside and outside education, we often talk about issues related to the digital divide, but we don't often look at a companion issue: the privacy divide.

This post is not intended to be exhaustive, by any means - and for people reading this, please share any relevant resources that would help expand the conversation.

There are a range of ways to dig into this conversation within EdTech, but one place to start is to examine how parents are informed of their rights under FERPA. This is an area where more work needs to be done, but even a superficial scan suggests that awareness of FERPA rights is not evenly distributed.

Leaving FERPA aside, it's worth looking at how content filtering plays out within schools. The quotes that follow are from a post about Securly, but they are broadly applicable to any environment that defaults to filtering.

"From the Securly dashboard, the administrators can see what students have and haven’t been able to access," she explains. "If I want to see what kids are posting on Twitter or Facebook, I can--everything on our Chromebooks gets logged by Securly."

However, for students whose only access is via a school-issued machine, the level of surveillance becomes more pervasive.

"Most of our students are economically disadvantaged, and use our device as their only device," DeLapo explains. "Students take Chromebooks home, and the Securly filters continue there."

This raises some additional questions. Who is more likely to have their activities tracked via social media monitoring? If something gets flagged, who is more likely to have the results passed to law enforcement, rather than a school official? These patterns follow the general trends of disproportionate suspension based on race.

What zip codes are more likely to receive the additional scrutiny of predictive policing?

Throughout these conversations, we need to remain aware that the systems currently in use are designed to spot problems. Framing everything in terms of the absence of a problem - or, more generally, the lower probability that a problem will eventually exist - creates a lens focused on a spectrum of deficits. The absence of a problem is not the same as something good, and when we use tools explicitly designed to identify and predict problems, they will "work" as designed. In the process of working, of course, they generate more data that will be used as the justification or rationale for future predictions and judgments.
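To make that feedback loop concrete, here is a minimal sketch in Python - with entirely made-up group names, weights, and thresholds - of how a flagging system's own output can become the "data" that raises future risk scores for the same population. It is an illustration of the self-reinforcing pattern described above, not a description of how any particular product works.

```python
from collections import defaultdict

# Hypothetical flagging model: each flag raised against a group feeds back into
# that group's "prior risk," so identical future behavior is more likely to be
# flagged again. All names, weights, and thresholds are illustrative.
flag_history = defaultdict(int)

def risk_score(group: str, activity_level: float) -> float:
    """Toy score: identical behavior scores higher for groups flagged before."""
    prior = 0.1 * flag_history[group]  # past flags inflate future scores
    return activity_level + prior

def review(group: str, activity_level: float, threshold: float = 1.0) -> bool:
    """Flag the activity, and record the flag as new 'evidence' if it trips the threshold."""
    flagged = risk_score(group, activity_level) >= threshold
    if flagged:
        flag_history[group] += 1
    return flagged

# One group starts under heavier surveillance (a single prior flag); both then
# exhibit identical behavior every week.
flag_history["heavily_monitored"] = 1
for week in range(1, 6):
    a = review("heavily_monitored", activity_level=0.95)
    b = review("lightly_monitored", activity_level=0.95)
    print(f"week {week}: heavily_monitored flagged={a}, lightly_monitored flagged={b}")

# The heavily monitored group is flagged every week and the other group never is,
# even though their behavior is identical - the flags themselves become the data
# that justifies the next round of flags.
```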

Increasing access and eliminating the digital divide need to happen, but people can be given access to different versions of the internet, or access via chokepoints that behave differently. We need look no further than the stunted vision of internet.org or the efforts of major industry players to destroy net neutrality to see how these visions play out.

To be more concrete about this, we can look at how AT&T is charging extra for the right to opt out of some (but not all) ad scanning on some of its Fiber internet access offerings. Julia Angwin has documented the cost - in cash and time - of the year she spent trying to protect her privacy.

Taking a step to the side, current examples show how data analysis fuels bias - from using phone habits to judge creditworthiness, to digital redlining based on online habits, to using data to discriminate in lending.

The digital divide is real, and the need to eliminate it is real. But as we move to correct this issue, we need to be very cognizant that not all access is created equal. We can't close the digital divide while opening the privacy divide - that approach would both exacerbate existing issues and extend them far into the future.