Observable Patterns in Conversations about Ilhan Omar

26 min read

A. Summary

This data analysis looks at conversations on Twitter about Ilhan Omar that occurred in June, July, and August of 2019. Specifically, this analysis examines 4 spikes in conversation on Twitter, looking at the accounts participating in each spike and at the top domains and YouTube videos shared by participants in the conversation. Collectively, these 4 spikes in conversation make up 1.19 million tweets.

Data used for this analysis was collected from a Twitter search for Congressperson Ilhan Omar's name: "Ilhan Omar." The search did not use any other hashtags or terms.

The analysis shows several trends:

  • Shares to YouTube dwarf shares to other domains. In each spike, YouTube was the most popular external domain shared - collectively, in all four spikes, at least 10,756 YouTube links were shared.
  • Aside from YouTube, the right wing site Gateway Pundit is the most popular domain shared. In three out of the four spikes, Gateway Pundit was the most popular site shared. In the fourth spike, Gateway Pundit was the second most popular site shared.
  • 260 accounts were highly active in all 4 spikes (in the 95th percentile or greater as measured by post count). 246 of these accounts (94.6%) were right leaning to far right, compared to 10 accounts (3.9%) that were mainstream to left wing.
  • The most popular YouTube shares in each spike trended hard right. Out of the top 16 YouTube videos shared, only one was from a left leaning source (Now This News); the remaining fifteen were from right wing sources, including some sources known for sharing extremist content and misinformation. Additionally, YouTube's recommended videos reinforced right leaning to far right perspectives: once a person landed on YouTube, the recommendations would keep them firmly rooted in a right wing perspective, or an extremist/white supremacist perspective.

The four spikes in conversation that occurred in the summer of 2019 show multiple ways in which the right and the far right dominated the conversation about Ilhan Omar on Twitter, and how that imbalance extended onto YouTube.

This analysis does not look at corresponding activity on Facebook, and this analysis does not look extensively at whether or not any accounts are engaging in coordinated misinformation efforts.

B. Introduction

In this analysis, we will look at 4 spikes in conversation about Ilhan Omar that occurred in June, July, and August. This analysis uses Twitter as a starting point, and also examines YouTube shares. Each individual spike is described in more detail below.

This analysis focuses on three things for each spike:

  • levels of participation among the most active accounts;
  • domains shared within the data set; and
  • top YouTube videos shared.

These general, and distinct, indicators help provide an initial sense of the source material used to inform the conversation.

At the end of the analysis of the four spikes, I also examine the apparent ideological leanings of the accounts that were highly active (in the 95th percentile or greater) across all four spikes.

In this analysis, I will generally not be identifying individual accounts for two main reasons:

  1. precise attribution is difficult; while some accounts within this dataset clearly appear to be inauthentic, I prefer to err on the side of caution. If/when an authentic account is incorrectly labelled as inauthentic, it can direct destructive attention toward that account. The short version: I'm personally not okay with doxing.
  2. issues related to misinformation go beyond individual accounts. Patterns are interesting, and individual accounts are rarely of interest in their own right, but they are of greater interest when they can be situated within a pattern.

In rare cases, if or when an individual account does help illustrate a larger point, I reserve the right to use an individual post, but generally only when the account in question is verified, and/or belongs to a public figure, and/or has been active in spreading misinformation. In most cases, if or when I use an individual post as an example, it will be stripped of as many non-relevant details as possible.

C. Questions Asked and General Notes

This section provides context and some general notes on methodology used in the analysis. If you want to read this later and skip straight to the analysis of the spikes, head right this way!

C1. Who/what is creating the buzz?

To help get a rough sense of how participation in this conversation unfolds, I calculate what percentage of accounts participating in the conversation create 10% of all posts in the spike. This number is a rough proxy for how top-heavy a conversation might be: in a balanced conversation, 10% of participants would create 10% of the conversation.
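For readers who want to reproduce this calculation, here is a minimal sketch in Python; the function name and sample data are hypothetical, and it assumes a simple list with one account name per post:

```python
from collections import Counter

def pct_accounts_creating_share(authors, share=0.10):
    """Percent of accounts that, counting from the most active account
    down, collectively create `share` of all posts. `authors` holds
    one account name per post."""
    counts = Counter(authors)
    target = share * len(authors)        # e.g. 10% of all posts
    running, top = 0, 0
    for _, n in counts.most_common():    # most active accounts first
        running += n
        top += 1
        if running >= target:
            break
    return 100.0 * top / len(counts)

# Toy example: one very loud account plus ten quieter ones.
posts = ["loud"] * 90 + list("abcdefghij")
print(pct_accounts_creating_share(posts))  # ~9.1: 1 of 11 accounts creates 10% of posts
```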

It cannot be emphasized enough that these numbers are a very rough proxy for how engaged the most engaged participants are, and these numbers are best understood as indicators of other things to look for, rather than as meaningful in their own right. On Twitter - as in life - conversations can be dominated by very loud or active participants. The gap between the percent of participants and percent of the overall conversation can be an interesting indicator. When the percentage of participants edges closer to 10%, it can suggest more balanced participation across accounts. When percentage of participants is smaller, it can indicate a more frenzied conversation, higher participation by spambots (on or off topic), or other forms of artificial manipulation.

However, to re-emphasize this point: these numbers should only be understood as potential indicators. Additionally, the search terms and filters used to generate a data set can affect what these numbers look like, which makes it difficult to use these numbers to make apples to apples comparisons across data sets generated from different search terms. I am including these numbers here because they provide some context, but they should be considered rough indicators, at best.

C2. What domains are shared?

The domains used as sources within a conversation can provide a rough indication of the perspectives and ideological leanings of participants. Collecting the list of domains is the easy part. Coding those domains on a scale that measures (or approximates a measure of) ideological leaning is more difficult, and generally satisfies no one. However, it's a necessary element of the work, and I am attempting to be as clear and transparent as possible about how domains are coded.
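As a point of reference, here is a minimal sketch of that easy part, assuming a list of URLs extracted from tweets (the sample URLs are hypothetical):

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sample of URLs pulled from tweets in a spike.
urls = [
    "https://www.youtube.com/watch?v=abc123",
    "https://www.thegatewaypundit.com/2019/07/some-story/",
    "https://www.startribune.com/some-story/",
    "https://youtu.be/abc123",
]

def domain(url):
    """Reduce a URL to its host, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

# Note: shorteners and mirrors (youtu.be vs youtube.com) count as
# separate domains unless they are normalized first.
print(Counter(domain(u) for u in urls).most_common())
```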

For this analysis, I created two general groups using the spectrum of political right to political left. At the outset, I want to be clear that this definition is an oversimplification. However, for the purposes of this analysis, the oversimplification embedded in this coding is both a strength and a weakness - while there are going to be fringe cases that don't fit cleanly within this coding, the general structure is simple to the point where it is easy to use and easy to understand.

The two general categories are:

  • mainstream to left leaning to far left;
  • right leaning to far right.

In determining where a publication stood on the spectrum from far right to right leaning to mainstream to left leaning to far left, publications like USA Today, the AP, and Reuters are considered mainstream. Sources like CNN, the NY Times, and the Washington Post, which are generally mainstream but, in aggregate, lean left, are included in the "mainstream to left leaning to far left" group. Sites like Mediaite and Raw Story, which have an editorial direction that is strongly to the left and share stories and headlines designed to be clickbait and/or to misrepresent the facts of an issue to fit a political or ideological narrative, are also coded in this group. Publications like the New York Post and Wall Street Journal, which consistently swing right, are included as mainstream sources and are coded within the "mainstream to left leaning to far left" group.

In general, for a source to be considered right leaning or far right, it needed to be to the right of the Wall Street Journal or the New York Post. Fox News (discussed in more detail below) is coded within the "right leaning to far right" group, while Fox affiliates -- who often have more balance and a degree of editorial independence -- were coded within the mainstream group. Sites associated with known racists or far right activists were coded as right leaning to far right.

Advocacy sites were coded within the political affiliation that most closely aligned with their advocacy. I also used Media Bias Fact Check to check my coding. This writeup also contains the list of the top 50 domains shared in each spike, and that list includes my coding so it can be checked for accuracy and argued over indefinitely.

The decision about where to code Fox News was surprisingly difficult. My initial inclination -- largely because of the presence of voices like Sean Hannity, Tucker Carlson, Lou Dobbs, Laura Ingraham, Jeanine Pirro, etc -- was to include Fox within right leaning to far right. However, there are a small number of journalists in their news unit (looking at you, Shep Smith) who, while definitely leaning right, have committed acts of actual journalism.

However, this question was simplified by how Fox shares its content on YouTube. On YouTube, Fox shares its opinion hosts -- many of whom share biased, racist, misogynistic content, and/or blatant conspiracy theories -- under the "Fox News" name.

Fox News opinion hosts

This clear connection of the news side and opinion side on their YouTube presence -- which has over 3 million subscribers, and millions of views on its videos -- simplifies the decision, and was the deciding factor in grouping Fox News into the "right leaning to far right" group.

The coding in this analysis should be understood as a rough grouping of political leaning, and this coding stops short of determining whether or not a site shares false, misleading, or inaccurate stories. In some cases, if a site has a clear track record of spreading misinformation, that is noted in the analysis.

C3. What does it mean when google.com shows up in a domain list?

The domain "google.com" shows up in the listing of top domains; this is generally an indication of a hamfisted and amateurish setup of Google's "accelerated mobile pages" - more information available here.

The National Review provides a great example of this incompetence in action: https://www.google.com/amp/s/www.nationalreview.com/news/link-to-misinformation/amp. In this example (with the full url changed so as not to provide more visibility to any stories), you can see that google (dot) com shows up as the primary domain. This is a common trait among both less reputable sites and reputable sites with sub-par technical implementations: because the main domain shows up as "google (dot) com," the site will generally show up as "trustworthy" regardless of whether or not the site is reliable. This is one of several ways that AMP is not good.
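To recover the wrapped site from one of these AMP URLs, one option is to peel the real domain out of the URL path; here is a minimal sketch, using the (modified) National Review URL above:

```python
from urllib.parse import urlparse

def true_domain(url):
    """If a URL is a google.com/amp/ wrapper, return the wrapped
    site's host; otherwise return the URL's own host."""
    parsed = urlparse(url)
    if parsed.netloc.endswith("google.com") and parsed.path.startswith("/amp/"):
        inner = parsed.path[len("/amp/"):]   # e.g. "s/www.example.com/story/amp"
        if inner.startswith("s/"):           # "s/" marks an https source
            inner = inner[len("s/"):]
        return inner.split("/", 1)[0]
    return parsed.netloc

print(true_domain("https://www.google.com/amp/s/www.nationalreview.com/news/link-to-misinformation/amp"))
# -> www.nationalreview.com
```

Without this kind of normalization, every AMP share gets credited to google.com, which is exactly why that domain appears in the lists below.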

C4. What does it mean when twitter.com shows up in the domain list?

Links to twitter.com indicate that people are sharing and amplifying individual tweets, which can be indicative of echo chambers and/or highlighting accounts to swarm. Additional analysis of accounts sharing links to other Twitter URLs is required to gauge whether or not there is any level of artificial or coordinated signal boosting among these accounts.

C5. Analysis of YouTube Shares

For each spike, I examine the top 4 YouTube videos shared. This analysis looks at:

  • the number of shares on Twitter pointing to the video;
  • the source of the video; and
  • the number of plays for the video.

For the first and fourth most popular videos in each spike, the analysis includes a breakdown of the recommended videos in the sidebar, up to a maximum of 9.
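Identifying the top videos amounts to normalizing YouTube URLs to a video ID and counting; a minimal sketch covering the two most common URL shapes (the sample URLs are hypothetical):

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

def youtube_id(url):
    """Extract a video ID from the common YouTube URL shapes."""
    p = urlparse(url)
    if p.netloc.endswith("youtu.be"):           # short links: youtu.be/<id>
        return p.path.lstrip("/") or None
    if "youtube.com" in p.netloc and p.path == "/watch":
        return parse_qs(p.query).get("v", [None])[0]
    return None

shared = [
    "https://www.youtube.com/watch?v=abc123",
    "https://youtu.be/abc123",
    "https://www.youtube.com/watch?v=xyz789",
]
print(Counter(filter(None, map(youtube_id, shared))).most_common(4))
# [('abc123', 2), ('xyz789', 1)]
```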

D. Spike One: June 22nd through June 29th

This initial spike coincided with congressional delegations visiting border camps holding people seeking asylum in the US. Multiple congresspeople compared the dehumanizing conditions of these camps to concentration camps, and the story was gaining increased visibility via press coverage. Additionally, on June 20th, the NY Times put out a story profiling some of the people having racist reactions to Somali refugees resettling in Minnesota.

The subject of this spike was misinformation about Ilhan Omar's past.

Spike 1 posts

Between June 22nd and June 29th, 175,260 tweets came from 80,365 accounts.

The top 544 most active accounts - .68% of all active accounts in this spike - created 10% of all content in this spike.

Across all accounts, approximately 1500 unique domains were shared a total of 21,314 times.

A scan through the top 20 domains shared over this time period shows a strong skew toward right wing sites, with multiple sites of conspiracy theorists and far right figures appearing above mainstream sites. In the top 20 sites, 3828 shares point to 13 different right leaning or far right domains. 899 shares point to mainstream or left leaning content -- and all of those shares are from one source, the Star Tribune, the local paper in Ilhan Omar's district. Six of the domains in the top 20 were either link sharing services or links to other social media sites like Facebook or YouTube.

Out of the top 50 sites, 28 domains were right leaning or far right; these 28 domains were shared 4732 times. 8 domains were mainstream to left leaning to far left, and these domains were shared 1297 times. When we look at individual examples, fringe conspiracy sites and outright hate sites were shared in greater numbers than mainstream news sites. For example, links to Pam Geller's site were shared 184 times; Laura Loomer's site was shared 147 times; Infowars was shared 75 times; and the New York Times was shared 56 times.

Links to YouTube videos dwarf shares to other domains, with 1776 shares to YouTube. The next most popular domain after YouTube is Gateway Pundit, with 1328 domain shares. The Star Tribune - a local paper in Minnesota which is considered both mainstream and reliable - was shared 899 times.

The full list of the top 50 domains is included below.

The top 4 YouTube shares are listed below. The top 3 videos - shared collectively 406 times during this spike - all point to right leaning to far right content. The 4th most shared video - shared 91 times during this spike - is from Now This News, a progressive organization.

A look at the YouTube pages for these videos, however, suggests that the sharing of the link from Twitter is just the beginning. The screenshot shared below of the Rebel Media video was taken on August 20th from a clean browser while not logged in to YouTube. The 9 recommended videos at the top of the list include:

Spike 1 - video 1

  • 4 links to Fox News
  • 1 link to CNN
  • 1 link to Piers Morgan
  • 1 link to Channel 4 News
  • 1 link to Star Parker
  • 1 link to a Vice "debate"

If a person comes to this video, the main options presented to them slant heavily to right leaning to far right perspectives.

Looking at the only progressive video in the top 4 most shared -- the Now This News video, which was the 4th most popular video shared -- the breakdown of the 9 top recommended videos includes:

Spike 1 - Video 4

  • 3 links to Fox News
  • 2 links to MSNBC
  • 1 link to a Bill Maher interview with Ben Shapiro
  • 1 link to CNN
  • 1 link to C-SPAN
  • 1 link to The Daily Show

The YouTube recommendations on these videos include a small number of mainstream to left leaning sources, but the majority of recommendations are to right wing sources.

E. Spike Two: July 9th to July 13th

This spike appears to be sparked by a Tucker Carlson segment where Carlson continued his pattern of using racist smears as a core element of his program.

Spike 2 posts

In this spike, 90,788 accounts posted 209,404 times over 5 days (July 9-13).

The top 703 most active accounts - .77% of all active accounts in this spike - created 10% of all content in this spike.

In this time period, approximately 1450 domains were shared 20,702 times.

Out of the top 20 domains shared, 3238 shares pointed to 11 different right leaning or far right domains. 790 shares pointed to 4 different mainstream or left leaning domains.

Out of the top 50 domains shared, 3803 shares pointed to 19 different right leaning to far right domains. 1604 shares pointed to 16 mainstream to left leaning to far left domains.

As with the first spike, links to Twitter and YouTube dominated shares, with 7079 and 1391 shares, respectively. Fox News, The Gateway Pundit, and Breitbart were the next most popular domains, collectively shared 2062 times. In comparison, the most popular mainstream to left leaning domains (Huffington Post, Mediaite, and Microsoft News) were shared a total of 625 times.

The Western Journal - a far right site run by a political activist who was responsible for the Willie Horton ad and who currently runs a PAC with Herman Cain - was shared 198 times. In comparison, the Washington Post was shared 165 times.

The most popular YouTube videos slant heavily toward right leaning and far right sources as well. The top four YouTube shares all point to videos that represent right wing perspectives.

The most shared video - from the Next News Network - has recommended videos that are almost exclusively right wing. The recommended videos include:

Spike 2 - Most shared on YouTube

  • 6 from Fox News
  • 1 from NBC News
  • 1 from "Valuetainment"
  • 1 from "Pure living for life"

The fourth most popular video - from an account named "Contemptor" - follows the same pattern. Recommended videos include:

Spike 2 - 4th on YouTube

  • 5 to Fox News
  • 1 to a Bill Maher interview with Ben Shapiro
  • 1 to a video of Ann Coulter calling feminists "angry man-hating lesbians"
  • 1 to CNN
  • 1 to CBS News

The second spike has very similar patterns to the first spike: the share of domains is heavily slanted to right leaning and far right content. Top YouTube shares are nearly exclusively to right leaning or far right content, and the recommended videos from the top shares are heavily weighted to right leaning or far right sources.

F. Spike Three: July 13th to July 18th

The third spike picks up where the second spike ends, and covers the period in which Trump told Ilhan Omar and three other congressional representatives to go back to "the totally broken and crime infested places from which they came."

Trump comments

While both the second and the third spike include parts of the 13th, the second spike ends at 03:00 on the 13th, and the 3rd spike picks up at 04:00.

Spike 3 posts

In the time period between July 13th and July 18th, 232,293 accounts posted 622,855 tweets over 6 days.

The top 1605 most active accounts - .69% of all active accounts in this spike - created 10% of all content in this spike.

In this time period, approximately 3450 domains were shared 81,815 times.

Out of the top 20 domains shared, 8325 shares link to 7 right leaning or far right sources. 5401 shares link to 6 different mainstream or left leaning or far left domains.

Out of the top 50, 12,294 posts linked to 21 right leaning or far right domains. 7995 shares linked to 16 mainstream to left leaning to far left domains. In this third spike, shares to right wing sources still dominate shares to left wing or mainstream sources. The Gateway Pundit - a far right site that regularly spreads misinformation - was shared 4780 times; this is more than the combined total of the top 4 most shared mainstream to left leaning to far left sites (Huffington Post, the Star Tribune, Wall Street Journal, and The Guardian), which were shared a total of 4400 times.

Shares of YouTube videos continue the right leaning to far right domination seen in the first two spikes.

In the third spike, 5882 total posts share links to YouTube, and the top 4 videos shared all represent right wing viewpoints.

The recommendations from The Blaze link almost exclusively to right leaning or far right sources:

Spike 3 - The Blaze

  • 6 from Fox News
  • 1 from Vice
  • 1 from Black Pill
  • 1 from Glenn Beck

The ads and recommendations from The Next News Network video point primarily to right wing content. For this video, an ad cut one video out from the top screen, so we only have eight video recommendations.

Spike 3 Next News Network

  • 6 from Fox News
  • 1 from "enduringcharm"
  • 1 from Vice

G. Spike 4: August 15 - August 17

This spike was triggered by Israel refusing entry to Ilhan Omar and Rashida Tlaib, and President Trump's two tweets supporting a foreign nation over two elected congresspeople.

Spike 4 posts

In the time period between August 15th and August 17th, 93,844 accounts posted 188,656 tweets over 3 days.

The top 826 most active accounts - .88% of all active accounts in this spike - created 10% of all content in this spike.

In the fourth spike, approximately 2050 domains were shared 32,780 times.

Out of the top 20 domains shared, 3514 shares link to 6 right leaning or far right sources. 2370 shares link to 5 different mainstream or left leaning or far left domains.

Out of the top 50 domains shared, 4864 posts linked to 17 right leaning or far right domains. 4201 shares linked to 19 mainstream to left leaning to far left domains. In this fourth spike, total shares to right wing sources still dominate shares to left wing or mainstream sources - despite the fact that the top 50 includes 2 more mainstream to left leaning domains than right leaning domains.

The fourth spike follows the patterns of the first three spikes, with links to right wing domains publishing dubious or outright racist and/or extreme content being shared at a higher volume than links to mainstream or left leaning or far left content. The Gateway Pundit was shared 1077 times, more than twice the total of shares to the NY Times, the most shared mainstream to left leaning site, which was shared 517 times. The Western Journal was shared 190 times, and links to Laura Loomer's site were shared 154 times; links to the Washington Post were shared 151 times.

In the fourth spike, 1707 posts shared links to YouTube videos. As with the other spikes, the most popular videos all featured right wing content, including content from sources known to push misinformation.

Looking at the videos and links shared on the first screen with the top shared video, from Black Pill, we have 6 options - two ads, and four recommended videos. The two ads are for the Epoch Times and Judicial Watch. The Epoch Times has recently been engaged in highly suspect and misleading behavior on Facebook, and Judicial Watch is a far right source of conspiracy theories.

Spike 4 - YouTube video 1

The other video recommendations include:

  • 2 for Fox News
  • 1 for Black Pill
  • 1 for PragerU

The 4th most shared video - also from Black Pill - includes 8 links on the top screen: 7 videos and one ad.

Spike 4 - YouTube video number 4

The ad is for the National Republican Congressional Committee.

The video recommendations include:

  • 4 for Fox News
  • 1 for Fox Business
  • 1 for the Daily Signal
  • 1 for Huckabee

As with the other spikes, the top shared videos are right leaning to far right, and the recommended videos from YouTube are nearly all right leaning to far right.

H. Who Shows Up?

As noted in the summary of each spike, a small percentage of accounts creates an outsize percentage of the content. This isn't necessarily abnormal, but over time, noting which accounts show up most frequently can also help illustrate patterns. For each spike, I collected the accounts that were in the 95th percentile or higher as measured by post count. Then, I looked at which accounts were in the 95th percentile of activity across all four spikes.
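A minimal sketch of that selection, assuming one account-to-post-count mapping per spike (the names and structure are hypothetical):

```python
def top_five_percent(post_counts):
    """Accounts at or above the 95th percentile by post count.
    `post_counts` maps account name -> posts in one spike."""
    counts = sorted(post_counts.values())
    cutoff = counts[int(0.95 * (len(counts) - 1))]  # nearest-rank percentile
    return {acct for acct, n in post_counts.items() if n >= cutoff}

def repeat_heavy_hitters(spikes):
    """Accounts in the 95th percentile of activity in every spike."""
    return set.intersection(*(top_five_percent(s) for s in spikes))

# Usage: repeat_heavy_hitters([spike1_counts, spike2_counts, spike3_counts, spike4_counts])
```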

260 accounts total were highly active across all four spikes covered in this analysis. Out of these 260 accounts:

  • 246 accounts (94.6%) are right leaning to far right.
  • 10 accounts (3.9%) are mainstream to left leaning to far left.
  • 4 accounts (1.5%) were not clearly affiliated. These accounts were on a spectrum between overt gibberish and failed attempts at parody/joke accounts.

Coding of account leanings examined general traits of the accounts, including bios, recent posting histories, hashtags used, domains shared, and posts liked or retweeted. The following tweets provide samples from accounts that were coded as right leaning or far right:

Right wing Twitter example 1

Right wing Twitter example 2

Right wing Twitter example 3

Two examples of left leaning accounts that were active across all four spikes are Ilhan Omar and the news outlet The Hill.

Among the most active repeat participants, right and far right accounts vastly outnumbered mainstream and left leaning accounts. This analysis does not make any effort to determine whether or not these accounts are connected to real people, or whether or not these accounts are part of inorganic or inauthentic amplification as part of a larger network. While many of these accounts do show signs of being trolls and/or sockpuppets, more detailed analysis is required to determine potential authenticity or inauthenticity of individual accounts.

The overwhelming number of active participants from the right, relative to the much smaller number of participants from the mainstream and the left, indicates that on Twitter, right wing accounts show up more consistently. The fact that just under 95% of active repeat participants in these spikes are right leaning to far right, with just under 4% being left leaning or far left, helps highlight that in the conversations about Ilhan Omar, the right wing and far right voices are significantly more consistent and active than left leaning voices. This imbalance calls out for additional research on these accounts to determine how many can be connected to actual people, and how many are potentially sockpuppets working within a network.

I. Conclusion

When looking at what domains get shared, and at the most popular shares of YouTube videos, two facts become clear about the recent conversations about Ilhan Omar:

  • Content from right leaning to far right domains is shared at a much higher volume than mainstream, left leaning or far-left domains.
  • On YouTube, right leaning to far right content is initially amplified by disproportionate sharing from Twitter, and visitors to YouTube are subsequently served more right leaning to far right content via YouTube's content recommendation algorithm.

Individually, either of these elements indicates that right wing perspectives are overwhelming the conversation about an elected official. Taken together, however, these two factors are mutually supportive - this is how a closed system on seemingly "open" platforms takes shape.

When the imbalance in domain shares, the imbalance in shares to YouTube videos, and the rabbit hole effect of YouTube's content recommendation algorithm are combined, we get a clearer sense of how social media platforms can potentially be gamed in parallel to fabricate consensus, and to support the spread of increasingly radical and hateful content. The imbalance in one conversation (or spike) shifts the bounds of what's "normal," and the next conversation shifts the norms even further.

This imbalance is further multiplied by the repeated rates of participation from right wing accounts. These accounts draw on older content generated from past flareups, creating a system that supports bias or misinformation in depth. Multiple content distribution strategies are at play here, and the whole is absolutely greater than the sum of its parts.

Over time, the right leaning to far right content creates an ever-growing foundation of sources that it can use to buttress arguments in future conversations. This ever-growing body of content provides a repository that reinforces a world view and a perspective. Conversations about specific issues become less about the individual issue, and more about proselytizing a world view and bringing people into the fold. To make a vast oversimplification, one of the possibilities suggested by this data set is that the left argues about specific points, while the right uses specific points to proselytize a world view.

While this analysis stays away from whether or not any of the activity is coordinated or inauthentic, it highlights that conservative complaints of "censorship" on social media are somewhere between flimsy and baseless. The data set used in this analysis was derived from a search on a person's name. Theoretically, the results should have been pretty balanced. If YouTube and Twitter are attempting to be biased against conservatives, they are very bad at it. Similarly, if they are attempting to check or curb the use of their platforms as a means of spreading misinformation and extreme speech, they're not doing great there either.

I have yet to see any platform provide concrete data around the numbers of FTEs (and I'm talking full, salaried employees, not contractors) with dedicated time and clearly defined authority to shut down hate speech and misinformation. I have also never seen comparisons of staffing levels between, for example, advertising, or sales, or marketing, and teams fighting misinformation and abuse. If and when platforms ever become transparent and show us this information, we could begin to get a more concrete sense of how they prioritize the health of their platform relative to other business interests.

On August 27th, YouTube released new guidelines and renewed promises to "RAISE UP authoritative voices" and "REDUCE the spread of content that brushes right up against our policy line." However, given what is readily apparent on their platforms, the visible results of the current efforts of Twitter and YouTube - as observed in this analysis - do not appear remotely effective.

J. Top 50 Domains Shared

Spike 1

Spike 2

Spike 3

Spike 4

Quick Response

2 min read

Between July 28th and August 4th - a period of 8 days - 15 mass shootings occurred in the United States.

In at least two of the events, fast police responses were credited with saving lives. In the mass shooting that occurred at the Gilroy Garlic Festival, where 2 children were among the people killed, a "heavy police presence" was credited with saving lives.

Gilroy pull quote

The chief credited a heavy police presence for saving lives as chaos descended on the decades-old festival in Gilroy, a city about 30 miles south of San Jose. “We had many, many officers in the park at the time this occurred … which accounts for a very, very quick response time,” Smithee said.

In the mass shooting at Dayton, Ohio, where at least 9 people are currently reported killed, the mayor said that police "neutralized" the person with the gun in less than a minute.

Dayton pull quote

In a press conference, she said the police neutralized the shooter in less than one minute. “If Dayton Police had not gotten to the shooter in less than a minute ... hundreds of people could be dead today,” Whaley said as she praised law enforcement for their quick response. The shooter had a .223-caliber gun with high-capacity magazine and was wearing body armor, she said.

The fact that professional law enforcement saved lives is a very, very good thing - my observations in no way diminish their contribution or their service. But putting the burden on law enforcement is completely misplaced. In Dayton, law enforcement was there in under a minute - and nine people are dead, and 26 are wounded.

In Gilroy, a heavy law enforcement presence stopped the killing at three people, with 12 additional people injured.

Our measure of success can't be smaller body counts. Our measure of safety can't be an increased armed police presence everywhere.

Vice and Philip Morris Partner to Create Vaping "Documentaries" - and YouTube Provides the Assist

2 min read

Vice News partnered with Philip Morris to create pro-vaping content. More details are available in this thread, and via the Financial Times.

The "documentaries" are, of course, on YouTube. One sample documentary is called out in this thread.

Link to "documentary"

When we go to YouTube to watch the video, we can see the recommendation algorithm kick in. I navigated through these videos while using Tor, and not logged in to Google, so these recommendations would not be affected by any account history.

YouTube's recommendation algorithm leads viewers to increasingly pro-vaping content. If we follow autoplay three times (to see a total of 4 videos), by the time we hit the 4th video, one of the top recommendations is for "5 Things Every New Vaper Must Know" which is just below "Stoner Compilation 2019." I'm not linking to these videos because I don't want to give them any direct exposure, but the screenshots of the trail are included below.

Video One - the Philip Morris and Vice Media production.

First video

Video Two - top recommendation from Video One.

Second video

Video Three - top recommendation from Video Two.

Third video

Video Four - top recommendation from Video Three.

Fourth video

Video Five - the fourth recommendation from Video Four.

Fifth video - vaping recommendations

Once a person clicks on "5 Things Every New Vaper Must Know," we are firmly in the territory of vaping recommendations.

As noted above, Philip Morris and Vice teamed up to market vaping to youth. YouTube ensures that the "documentaries" produced by Vice lead people directly to more pro-vaping content.

Readings on Big Data Use and Implications

2 min read

This is a general and incomplete list. For additional recommendations, please let me know!

2006: AOL releases data on searches. https://arstechnica.com/uncategorized/2006/08/7433/

2012: Location tracking, and predicting where we will go: https://slate.com/technology/2012/08/cellphone-tracking-what-happens-when-our-smartphones-can-predict-our-every-move.html

2013: Likes are revealing. This study formed the basis of Cambridge Analytica's work. https://www.theguardian.com/technology/2013/mar/11/facebook-users-reveal-intimate-secrets

2013: Location data is a highly accurate method of identifying individuals. 2 data points can identify 50% of individuals; 4 data points identifies 95% of individuals. https://www.wired.com/2013/03/anonymous-phone-location-data/

2013: Discrimination in online ads: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2208240

2014: NYC Taxi data, aka anonymization is hard. https://arstechnica.com/tech-policy/2014/06/poorly-anonymized-logs-reveal-nyc-cab-drivers-detailed-whereabouts/

2016: From ProPublica, the different data categories Facebook (and other data collection companies) collect about us. https://www.propublica.org/article/facebook-doesnt-tell-users-everything-it-really-knows-about-them

2016: Big data, risk assessments, and sentencing: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

2017: Cambridge Analytica and the 2016 election: https://motherboard.vice.com/en_us/article/mg9vvn/how-our-likes-helped-trump-win

2017: Tracking a specific person, legally, with $1000 and adtech. https://www.wired.com/story/track-location-with-mobile-ads-1000-dollars-study/

Four books that help frame uses and issues with Big Data: 

Four Things I Would Recommend for Mycroft.ai

2 min read

Mycroft.ai is an open source voice assistant. It provides functionality that compares with Alexa, Google Home, Siri, etc, but with the potential to avoid the privacy issues of these proprietary systems. Because of the privacy advantages of using an open source system, Mycroft has an opportunity to distinguish itself in ways that would be meaningful, especially within educational settings.

If I were part of the team behind Mycroft.ai, these are four things I would recommend doing as soon as possible (and possibly this work is already in progress -- as I said, I'm not part of the team).

  1. Write a blog post (and/or update documentation) that describes exactly how data are used, paying particular attention to what stays on the device and what data (if any) need to be moved off the device.
  2. Develop curriculum for using Mycroft.ai in K12 STEAM classes, especially focusing on the Raspberry Pi and Linux versions.
  3. Build skills that focus on two areas: learning the technical details required to build skills for Mycroft devices; and a series of equity and social justice resources, perhaps in partnership with Facing History and/or Teaching Tolerance. As an added benefit, the process of building these skills could form the basis of the curriculum for point 2, above.
  4. Get foundation or grant funding to supply schools doing Mycroft development with Mycroft-compatible devices.

Voice activated software can be done well without creating unnecessary privacy risks. Large tech companies have a spotty track record -- at best -- of creating consistent, transparent rules about how they protect and respect the privacy of the people using their systems. Many people -- even technologists -- aren't aware of the alternatives. That's both a risk and an opportunity for open source and open hardware initiatives like Mycroft.ai.

Attending ISTE While My Country Puts Children in Prison

2 min read

In a few days, I'll be traveling to Chicago for ISTE 2018. I'm curious to listen to what people talk about, and what they don't talk about. 

Crying child separated from parents

Image credit: John Moore

If people are unwilling or unable to discuss the reality - the well documented, indisputable reality - that our government is putting children into makeshift jails, I will be curious to know why. We are living in a time when our government is putting babies - some as young as 6 months old - into cages, alone, and calling it a "tender age" shelter. When we torment children, and then torture language to mask the torment we are causing, we have multiple problems, but we cannot and should not pretend this is normal.

If, at ISTE, people try to retreat behind the veneer of "politeness," I will observe that demands for politeness in the face of the obscenities of xenophobia, racism, anti-LGBTQ bigotry, misogyny, and/or white supremacy are not polite: demands for politeness are another form of erasure.

I will be listening to how people attempt to position technology as neutral and apolitical in a time when the ability to retreat to the pillars of neutrality and apoliticism is a clear marker of privilege.

If we can't see and acknowledge that a government policy of hurting kids and destroying families is an educational issue, then we have problems.

And my recommendation here: if, in a conversation, you have a choice between polite or candid, choose candid. It might not go over well at the time, but long term, it's the kindest thing you can do.

Fordham CLIP Study on the Marketplace for Student Data: Thoughts and Reactions

9 min read

A new study was released today from the Fordham Center on Law and Information Policy (Fordham CLIP) on the marketplace for student data. It's a compelling read, and the opening sentence of the abstract provides a clear description of what is to follow:

Student lists are commercially available for purchase on the basis of ethnicity, affluence, religion, lifestyle, awkwardness, and even a perceived or predicted need for family planning services.

The study includes four recommendations that help frame the conversation. I'm including them here as points of reference.

  1. The commercial marketplace for student information should not be a subterranean market. Parents, students, and the general public should be able to reasonably know (i) the identities of student data brokers, (ii) what lists and selects they are selling, and (iii) where the data for student lists and selects derives. A model like the Fair Credit Reporting Act (FCRA) should apply to compilation, sale, and use of student data once outside of schools and FERPA protections. If data brokers are selling information on students based on stereotypes, this should be transparent and subject to parental and public scrutiny.
  2. Brokers of student data should be required to follow reasonable procedures to assure maximum possible accuracy of student data. Parents and emancipated students should be able to gain access to their student data and correct inaccuracies. Student data brokers should be obligated to notify purchasers and other downstream users when previously transferred data is proven inaccurate and these data recipients should be required to correct the inaccuracy.
  3. Parents and emancipated students should be able to opt out of uses of student data for commercial purposes unrelated to education or military recruitment.
  4. When surveys are administered to students through schools, data practices should be transparent, students and families should be informed as to any commercial purposes of surveys before they are administered, and there should be compliance with other obligations under the Protection of Pupil Rights Amendment (PPRA).

The study uses a conservative methodology to identify vendors selling student data, so in practical terms, they are almost certainly under-counting the number of vendors selling student data. One of the vendors selling student data identified in the survey clearly states that they have information on students between 2 and 13:

Our detailed and exhaustive set of student e-mail database has names of students between the ages of 2 and 13.

I am including a screenshot of the page to account for any changes that happen to this page into the future.

Students between 2 and 13

The study details multiple ways that data brokers actively (and in some cases, enthusiastically) exploit youth. One vendor had no qualms about selling a list of 14 and 15 year old girls for targeting around family planning services. The following quotation is from a sales representative responding to an inquiry from a researcher:

I know that your target audience was fourteen and fifteen year old girls for family planning services. I can definitely do the list you’re looking for -- I just have a couple more questions.

The study also highlights that, even for a motivated and informed research team, uncovering details about where data is collected from is often not possible. Companies have no legal obligation to disclose this information, and therefore, they don't. The observations of the research team dovetail with my firsthand experience researching similar issues. Unless there is a clear and undeniable legal reason for a company to disclose a specific piece of information, many companies will stonewall, obfuscate, or outright refuse to be transparent.

The study also emphasizes two of the elephants in the room regarding the privacy of students and youth: both FERPA and COPPA have enormous loopholes, and it's possible to be fully compliant with both laws and still do terrible things that erode privacy. The study covers some high level details, and as I've described in the past, FERPA directory information is valuable information.

The study also highlights the role of state level laws like SOPIPA. SOPIPA-style laws have been passed in multiple states nationwide, starting in California. This might actually feel like progress. However, when one stops and realizes that there have been a grand total of zero sanctions under SOPIPA, it's hard to escape the sense that some regulations are more privacy theater than privacy protection. While a strict count of sanctions under SOPIPA is a blunt measure of effectiveness, the lack of regulatory activity under SOPIPA since the law's passage either indicates that all the problems identified in SOPIPA have been fixed (hah!) or that the impact of the regulation is nonexistent. If a law passes and it's not enforced, what is the impact?

The report also notes that the data collected, shared, and/or sold goes far beyond simple contact information. The report details that one vendor collects information on a range of physical and mental health issues, family history regarding domestic abuse, and immigration status.

One bright spot in the report is that, among the small number of school districts that responded to the researchers' requests for information, none appeared to be selling or sharing student information to advertisers. However, even this bright area is undermined by the small number of districts surveyed, the fact that some districts took over a year to respond, and the fact that at least one district did not respond at all.

The report details the different ways that school-age youth are profiled by data brokers, with their information sold to support targeted advertising. While the report doesn't emphasize this, we need to understand profiling and advertising as separate but related issues. A targeted ad is an indication that profiling is occurring; profiling is an indication that data collection from or about students is occurring -- but we need to address the specific problems of each of these elements distinctly. Advertising, profiling (including combining data from multiple sources), and data collection without clearly obtained informed consent are each distinct problems that should be understood both individually and collectively.

If you work with youth (or, frankly, if you care about the future and want to add a layer of depth to how you understand information literacy) the report should be read multiple times, and shared and discussed with your colleagues. I strongly encourage this as required reading in both teacher training programs, and as back to school reading for educators in the fall of 2018.

But, taking a step back, the implications of this report shine a light on serious holes in how we understand "student" data. The report also demonstrates how the current requirement that a person be able to show a demonstrable harm from misuse of personal information is a sham. Moving forward, we need to refine and revise how we discuss misuse of information.

Many of the problems and abuses arise from systemic and entrenched lack of transparency. As demonstrated in the report:

It is difficult for parents and students to obtain specificity on data sources with an email, a phone call, or an internet search. From the perspective of parents and students, there is no data trail. Likewise, parents and students are generally unable to know how and why certain student lists were compiled or the basis for designating a student as associated with a particular attribute. Despite all of this, student lists are commercially available for purchase on the basis of ethnicity, affluence, religion, lifestyle, awkwardness, and even a perceived or predicted need for family planning services.

This is what information asymmetry looks like, and it mirrors multiple other power imbalances that stack the deck against those with less power. As documented in multiple places in the survey, a team of skilled researchers with legal, educational, and technical expertise was not able to pierce the veil of opacity maintained by data brokers and advertisers. It is both unrealistic and unethical to expect a person to be able to demonstrate harm from the use of specific data elements when the companies in a position to do the harm have no requirement to explain anything about their practices, including what data they used and how they obtained it.

But taking an additional step back, the report calls into question what we consider "student" data. The marketplace for data on school age people looks a lot like the market for people who are past the traditional school age: a complete lack of transparency about how the data are gathered, sold, used, and retained. It feels worse with youth because adults are somehow supposed to know better, but this is a fallacy. When we turn 18, or 21, or 35, or 50, we aren't magically given a guidebook about how data brokers and profiling work. The information asymmetry documented in the Fordham report is the same for adults as it is for youth. Both adults and youth face comparable problems, but the injustice of the current systems is more obvious when kids are the target.

Companies collect data about people, and some of the people happen to be students. Possibly, some of these data might have been collected within an educational context. But, even if the edtech industry had airtight privacy and security, multiple other sources for data about youth exist. Between video games, health-related data breaches (which often contain data about youth and families in the breached records), Disney and comparable companies, Equifax, Experian, TransUnion, Acxiom, Musical.ly, Snapchat, Instagram, Facebook Messenger, parental oversharing on social media, and publicly available data sources, there is no shortage of readily available data about youth, their families, and their demographics. When we pair that with technology companies (both inside and outside edtech) going out of business and liquidating their data as part of the bankruptcy process, the ability to get information about youth and their families is clearly not an issue.

It's more accurate to talk about data that have been collected on people who are school age. To be very clear, data collected in a learning environment is incredibly sensitive, and deserves strong protections. But drawing a line between "educational" data and everything else misses the point. Non-educational data can be used to do the same types of redlining as educational data. If we claim to care about student privacy, then we need to do a better job with privacy in general.

This is what is at stake when we talk about the need to limit our ISPs from selling our web browsing history, our cellular providers from selling our usage information -- including precise information, in real time, about our location. What we consider student data is tied up in the data trails of their parents, friends, relatives, colleagues -- information about a younger sister is tied to that of her older siblings. Privacy isn't an individual trait. We are all in this together.

Read the study. Share the study. It's important work that helps quantify and clarify issues related to data privacy for adults and youth.

Privacy Postcard: Starbucks Mobile App

2 min read

For more information about Privacy Postcards, read this post.

General Information

App permissions

The Starbucks app has permissions to read your contacts, and to get network location and location from GPS.

Starbucks app permissions

Access contacts

The application permissions indicate that the app can access contacts, and this is reinforced in the privacy policy.


Law enforcement

Starbucks terms specify that they will share data if sharing the information is required by law, or if sharing information helps protect Starbucks' rights.

Starbucks law enforcement

Location information and Device IDs

Starbucks can use location as part of a broader user profile.

Starbucks collects location info

Data Combined from External Sources

The terms specify that Starbucks can collect, store, and use information about you from multiple sources, including other companies.

Starbucks data collection

Third Party Collection

The terms state that Starbucks can allow third parties to collect device and location information.

Third party

Social Sharing or Login

The terms state that Starbucks facilitates tracking across multiple services.

Social sharing

Summary of Risk

The Starbucks mobile app has several problematic areas. Individually, they would all be grounds for concern. Collectively, they show a clear lack of regard for the privacy of people who use the Starbucks app. The fact that the service harvests contacts, and harvests location information, and allows selected information to be used by third parties to profile people creates significant privacy risk.

People shouldn't have to sell out their contact list and share their physical location to get a cup of coffee. I love coffee as much as the next person, but avoid the app (and maybe go to a local coffee shop), pay cash, and tip the barista well.

Privacy Postcards, or Poison Pill Privacy

10 min read

NOTE: While this is obvious to most people, I am restating this here for additional emphasis: this is my personal blog, and only represents my personal opinions. In this space, I am only writing for myself. END NOTE.

I am going to begin this post with a shocking, outrageous, hyperbolic statement: privacy policies are difficult to read.

Shocking. I know. Take a moment to pull yourself up from the fainting couch. Even Facebook doesn't read all the necessary terms. Policies are dense, difficult to parse, and in many cases appear to be overwhelming by design.

When evaluating a piece of technology, "regular" people want an answer to one simple question: how will this app or service impact my privacy?

It's a reasonable question, and this process is designed to make it easier to get an answer to that question. When we evaluate the potential privacy risks of a service, good practice can often be undone by a single bad practice, so the art of assessing risk is often the art of searching for the poison pill.

To highlight that this process is both not comprehensive and focused on surfacing risks, I'm calling this process Privacy Postcards, or Poison Pill Privacy. It is not designed to be comprehensive; instead, it is designed to highlight potential problem areas that impact privacy. It's also designed to be straightforward enough that anyone can do this. Various privacy concerns are broken down, and include keywords that can be used to find relevant text in the policies.

To see an example of what this looks like in action, check out this example. The rest of this post explains the rationale behind the process.

If anyone reading this works in K12 education and you want to use this with students as part of media literacy, please let me know. I'd love to support this process, or just hear how it went and how the process could be improved.
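Since each concern below comes with a list of keywords, the mechanical part of the search can also be automated. Here is a minimal sketch in Python (the keyword sets shown are a subset of the lists in the sections below):

```python
import re

# A subset of the keyword lists from the sections below; extend as needed.
CONCERNS = {
    "contacts": ["contact", "friend"],
    "law enforcement": ["legal", "law enforcement", "comply"],
    "location/device": ["location", "identifier", "device"],
    "third parties": ["third party", "partner", "affiliate"],
}

def scan_policy(text, window=120):
    """Return, per concern, short excerpts surrounding each keyword hit
    so a reader can review the relevant policy language in context."""
    hits = {}
    for concern, keywords in CONCERNS.items():
        for kw in keywords:
            for m in re.finditer(re.escape(kw), text, re.IGNORECASE):
                start = max(0, m.start() - window)
                excerpt = text[start:m.end() + window].replace("\n", " ")
                hits.setdefault(concern, []).append(excerpt)
    return hits
```

Paste in a policy's text and read the excerpts for each concern, rather than scanning the whole document by hand.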

1. The Process

Application/Service

Collect some general information about the service under evaluation.

  • Name of Service:
  • Android App:
  • Privacy Policy url:
  • Policy Effective Date:

App permissions

Pull a screenshot of selected app permissions from the Google Play store. The iOS store from Apple does not support the transparency that is implemented in the Google Play store. If the service being evaluated does not have a mobile app, or only has an iOS version, skip this step.

The listing of app permissions is useful because it highlights some of the information that the service collects. The listing of app permissions is not a complete list of what the service collects, nor does it provide insight into how the information is used, shared, or sold. However, the breakdown of app permissions is a good tool to use to get a snapshot of how well or poorly the service limits data collection to just what is needed to deliver the service.

Access contacts

Accessing contacts from a phone or address book is one way that we can compromise our own privacy, and the privacy of our friends, family, and colleagues. This can be especially true for people who work in jobs where they have access to sensitive or privileged information. For example, if a therapist had contact information of patients stored in their phone and that information was harvested by an app, that could potentially compromise the privacy of the therapist's clients.

When looking at if or how contacts are accessed, it's useful to cross-reference what the app permissions tell us against what the privacy policy tells us. For example, if the app permissions state that the app can access contacts and the privacy policy says nothing about how contacts are protected, that's a sign that the privacy policy could have areas that are incomplete and/or inadequate.

Keywords: contact, friend, list, access

Law enforcement

Virtually every service in the US needs to comply with law enforcement requests, should they come in. However, the language that a service uses about how they comply with law enforcement requests can tell us a lot about the service's posture around protecting user privacy.

Additionally, if a service has no language in their terms about how they respond to law enforcement or other legal requests, that can be an indicator that other areas of the terms are incomplete and/or inadequate.

Keywords: legal, law enforcement, comply

Location information and Device IDs

As individual data elements, both a physical location and a device ID are sensitive pieces of information. It's also worth noting that there are multiple ways to get location information, and different ways of identifying an individual device. The easiest way to get precise location information is via the GPS functionality in mobile devices. However, IP addresses can also be mapped to specific locations, and a string of IP addresses (ie, what someone would get if they connected to a wireless network at their house, a local coffee shop, and a library) can give a sense of someone's movement over time.

Device IDs are unique identifiers, and every phone or tablet has multiple IDs that are unique to the device. Additionally, browser fingerprinting can be used on its own or alongside other IDs to precisely identify an individual.

The combination of a device ID and location provides the holy grail for data brokers and other trackers, such as advertisers: the ability to tie online and offline behavior to a specific identity. Once a data broker knows that a person with a specific device goes to a set of specific locations, they can use that information to refine what they know about that person. In this way, data collectors build and maintain profiles over time.

Keywords: location, zip, postal, identifier, browser, device, ID, street, address
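To make the device ID plus location pairing concrete, here is a toy sketch of how raw pings could be turned into a movement profile. All of the data here is fabricated for illustration; real trackers work at far larger scale.

```python
from collections import defaultdict

# Fabricated pings of the form (device_id, timestamp, location) --
# the kind of records that trackers and adtech SDKs can emit.
pings = [
    ("device-123", "2019-06-01T08:10", "home-ip-block"),
    ("device-123", "2019-06-01T09:05", "coffee-shop-wifi"),
    ("device-123", "2019-06-01T10:30", "library-wifi"),
    ("device-456", "2019-06-01T08:45", "office-ip-block"),
]

def movement_profiles(pings):
    """Group location pings by device ID into an ordered movement trail."""
    profiles = defaultdict(list)
    for device_id, ts, location in sorted(pings, key=lambda p: p[1]):
        profiles[device_id].append((ts, location))
    return profiles

for device, trail in movement_profiles(pings).items():
    print(device, "->", [loc for _, loc in trail])
```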

Data Combined from External Sources

As noted above, if a data broker can use a device ID and location information to tie a person to a location, they can then combine information from external sources to create a more thorough profile about a person, and that person's colleagues, friends, and families.

We can see examples of data recombination in how Experian sorts humans into classes: data recombination helps them identify and distinguish their "Picture Perfect Families" from the "Stock cars and State Parks" and the "Urban Survivors" and the "Small Towns Shallow Pockets".

And yes, the company combining this data and making these classifications is the same company that sold data to an identity thief and was responsible for a breach affecting 15 million people. Data recombination matters, and device identifiers within data sets allow companies to connect disparate data sources into a larger, more coherent profile.

Keywords: combine, enhance, augment, source
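The mechanics of recombination are mundane: once two data sets share a key, such as a device ID, merging them is trivial. A toy sketch, with fabricated data sets standing in for the kinds of sources a broker might buy separately:

```python
# Two fabricated data sets keyed by the same device ID: location-derived
# data, and marketing segments of the kind described above.
location_data = {"device-123": {"home": "zip-97201", "workplace": "zip-97204"}}
segment_data = {"device-123": {"segment": "Small Towns Shallow Pockets"}}

def recombine(*sources):
    """Merge records from multiple sources into one profile per device ID."""
    profile = {}
    for source in sources:
        for device_id, fields in source.items():
            profile.setdefault(device_id, {}).update(fields)
    return profile

print(recombine(location_data, segment_data))
# {'device-123': {'home': 'zip-97201', 'workplace': 'zip-97204',
#                 'segment': 'Small Towns Shallow Pockets'}}
```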

Third Party Collection

If a service allows third parties to collect data from users of the service, that creates an opportunity for each of these third parties to get information about people in the ways that we have described above. Third parties can access a range of information (such as device IDs, browser fingerprints, and browsing histories) about users on a service, and frequently, there is no practical way for people using a service to know what third parties are collecting information, or how these third parties will use it.

Additionally, third parties can also combine data from multiple sources.

Keywords: third, third party, external, partner, affiliate
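One concrete way to see third party collection in action: export a HAR file from your browser's network tab while loading the service, and list every host contacted other than the service itself. A rough sketch, where example.har and example.com are placeholders:

```python
import json
from urllib.parse import urlparse

def third_party_hosts(har_path, first_party_domain):
    """List the hosts a page contacts, other than the site you visited.

    har_path points at a HAR file exported from the browser's network
    tab; first_party_domain is the site being evaluated.
    """
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    hosts = {urlparse(entry["request"]["url"]).hostname
             for entry in har["log"]["entries"]}
    return sorted(h for h in hosts if h and not h.endswith(first_party_domain))

for host in third_party_hosts("example.har", "example.com"):
    print(host)
```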

Social Sharing or Login

Social sharing or login, when viewed through a privacy lens, should be seen as a specialized form of third party data collection. With social login, however, information about a person can be exchanged between the two services, or pulled from one service into the other.

Social login and social sharing features (like the Facebook "like" button, a "Pin it" link, or a "Share on Twitter" link) can send tracking information back to their home sites, even if the share never happens. Solutions like this option from Heise (the Shariff share buttons, which only contact the social networks after a visitor deliberately clicks) highlight how this privacy issue can be addressed.

Keywords: login, external, social, share, sharing
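To spot these embeds without reading a page's source by hand, you can scan the HTML for scripts and iframes loaded from well-known widget hosts. The host list in this sketch is illustrative and far from exhaustive:

```python
from html.parser import HTMLParser

# Hosts commonly used by social sharing/login widgets; an illustrative,
# not exhaustive, list.
WIDGET_HOSTS = ("connect.facebook.net", "platform.twitter.com",
                "assets.pinterest.com", "apis.google.com")

class WidgetFinder(HTMLParser):
    """Flag script/iframe tags that load from known social widget hosts."""
    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe"):
            src = dict(attrs).get("src") or ""
            if any(host in src for host in WIDGET_HOSTS):
                self.found.append(src)

finder = WidgetFinder()
finder.feed("<script src='https://connect.facebook.net/en_US/sdk.js'></script>")
print(finder.found)  # ['https://connect.facebook.net/en_US/sdk.js']
```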

Education-specific Language

This category only makes sense on services that are used in educational contexts. For services that are only used in a consumer context, this section might be superfluous.

I'm including COPPA in the list of keywords below even though COPPA is a consumer law. Because COPPA (in the US) is focused on children under 13, there are times when COPPA connects with educational settings.

Keywords: parent, teacher, student, school, family, education, FERPA, child, COPPA

Other

Because this list of concerns is incomplete, and there are other problematic areas, we need a place to highlight these concerns if and when they come up. When I use this structure, I will use this section to highlight interesting elements within the terms that don't fit into the other sections.

If, however, there are elements in the other sections that are especially problematic, I probably won't spend the time on this section.

Summary of Risk

This section is used to summarize the types of privacy risks associated with the service. As with the entire process, the goal here is not to be comprehensive. Rather, this section highlights potential risks, and whether those risks are in line with what the service does. For example, if a service collects location information, how is that information both protected from unwarranted use by third parties and used to benefit the user?

2. Closing Notes

At the risk of repeating myself unnecessarily, this process is not intended to be comprehensive.

The only goal here is to streamline the process of identifying and describing poison pills buried in privacy policies. This method of evaluation is not thorough. It will not capture every detail, and it will miss some problems. But it will also catch a lot. In a world where nothing is perfect, this process will hopefully prove useful.

The categories listed here all define different ways that data can be collected and used. One of the categories explicitly left out of the Privacy Postcard is data deletion. This is not an oversight; this is an intentional choice. Deletion is not well understood, and actual deletion is easier to do in theory than in practice. This is a longer conversation, but the main reason I am leaving deletion out is that data deletion generally doesn't touch any data collected by third party adtech allowed on a service. Because of this, assurances about data deletion can often create more confusion.

The remedy, of course, is for a service to not use any third party adtech, and to have strict contractual requirements with any third party services (like analytics providers) that restrict data use. Many educational software providers already do this, and it would be great to see this adopted more broadly within the tech industry at large.

The ongoing voyage of MySpace data - sold to an adtech company in 2011, re-sold in 2016, and breached in 2016 - highlights that data that is collected and not deleted can have a long shelf life, completely outside the context in which it was originally collected.

For those who want to use this structure to create your own Privacy Postcards, I have created a skeleton structure on Github. Please, feel free to clone this, copy it, modify it, and make it your own.

Dark Patterns when Deleting an Account on Facebook

3 min read

By default, Facebook makes it more complicated than it needs to be to delete an account. Their default state is to have an account be deactivated, but not deleted.

However, both the deactivation process and the deletion process can be undone if a person logs back into Facebook.

To make matters worse, to fully delete an account, a person needs to make a separate request to Facebook to start the account deletion process. Facebook splits the important information across two separate pages, which further complicates the process of actually deleting an account. The main page for deleting an account has some pretty straightforward language.

However, this language is undercut by the information on the page that describes the difference between deactivating and deleting an account.

Some key details from the second page that are omitted from the main page on deleting an account include this gem:

We delay deletion a few days after it's requested. A deletion request is cancelled if you log back into your Facebook account during this time.

This delay is critical, and the fact that it can be undone is also something that needs additional attention.

Facebook further clarifies what they consider "logging in" on a third, separate page, where they describe deactivating an account.

If you’d like to come back to Facebook after you’ve deactivated your account, you can reactivate your account at anytime by logging in with your email and password. Keep in mind, if you use your Facebook account to log into Facebook or somewhere else, your account will be reactivated.

While Facebook's instructions aren't remotely as clear as they should be, the language they use here implies that an account deletion request can be undone if a person logs in (or possibly just uses a service with an active Facebook login) at any point during the "few days" after requesting deletion. It's also unclear what happens if someone logs into Messenger. And, of course, the average person will never know that their Facebook account hasn't been deleted, because they won't be going back to Facebook to check.

My recommendations here for people looking to leave Facebook:

  • First, identify any third party services where you use Facebook login. If possible, migrate those accounts to a separate login unconnected from Facebook.
  • Second, delete the Facebook app from all mobile devices.
  • Third, using the web UI on a computer, request account deletion from within Facebook.
  • Fourth, install an ad blocker so Facebook has a harder time tracking you via social share icons.