Archive for the ‘Data’ Category

What Is A Chair?

Saturday, February 27th, 2016

I’m currently reading a book called “Sorting Things Out” about how categorization and standards shape our world. I highly recommend it. One of the early examples in the book made me think about the way we code algorithms to categorize different things, specifically the difficulties in categorizing people and the consequences such categorizations have on them.

Imagine you have to write a program that recognizes whether an object is a chair (ignore the complexities of computer vision and such, just stay with me for a bit). You could code a simple set of binary checks: does the object have four legs? Does it have a flat surface at the end of those four legs? If an object meets all your criteria, you could say it was indeed a chair. But what about a bench? It meets the criteria we’ve set up, but it’s technically not a chair, it’s a bench. Is a bench a subset of the chair category? What about a tree stump in the woods? That is most certainly not a chair, but you can definitely sit on it. The tree stump then calls into question the whole purpose of making the categorization in the first place. Are you trying to sort items in a warehouse, or are you just trying to find a place to sit down? The dilemma lies in whether you create a strict set of criteria that could exclude some items, or leave your rules lax and risk polluting your chair population with items such as tables.
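To make the dilemma concrete, here is roughly what such a set of binary checks might look like; every attribute and threshold below is invented for illustration, which is exactly the problem:

```python
from dataclasses import dataclass

@dataclass
class Thing:
    legs: int
    flat_surface: bool
    backrest: bool
    seat_width_cm: float

def is_chair(thing: Thing) -> bool:
    """Naive rule-based categorization. Every check below either
    excludes some real chairs or admits some non-chairs."""
    if thing.legs != 4:
        return False        # also excludes three-legged stools and swivel chairs
    if not thing.flat_surface:
        return False
    if not thing.backrest:
        return False        # also excludes backless chairs... if those count
    if thing.seat_width_cm > 60:
        return False        # an arbitrary cutoff added just to reject benches
    return True

# A bench fails only because of the width cutoff we invented for it,
# and a perfectly sittable tree stump fails nearly every check:
bench = Thing(legs=4, flat_surface=True, backrest=True, seat_width_cm=150)
stump = Thing(legs=0, flat_surface=True, backrest=False, seat_width_cm=40)
print(is_chair(bench), is_chair(stump))  # False False
```

Notice that the width cutoff does nothing except encode our prior decision that benches aren’t chairs; the rules don’t capture the purpose of the categorization, they just memorialize it.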

Making the distinction between a chair and not a chair is very easy for humans, but it’s very difficult for software, especially if the purpose of the chair question is to determine whether you can comfortably sit down. Mistaking a table for a chair is benign enough, but algorithms often deal with people, where mistakes can have life-altering consequences. Increasingly common are algorithms that decide whether a person gets approved for a loan, what prison sentence a person receives, what a person’s legal status as an immigrant will be, or whether a person is a good match for a particular position at a company. Inevitably there will be people stuck in that fuzzy area. What happens then? Anyone who deals with data about people needs to ask what happens when their algorithm fails to make a correct determination.

“Social Physics: How Good Ideas Spread–The Lessons From a New Science” – Alex Pentland

Monday, November 10th, 2014

I started reading Alex “Sandy” Pentland’s book, Social Physics. Several things interest me about this book. I’m very interested in how society behaves in today’s world, where we are increasingly connected to more people by weak social ties. Also interesting is that advances in data collection and analysis are bound to reach a point where we can continuously monitor and analyze people’s behavior. Who will have this knowledge? How will they use it? What will this world look like? Lastly, I’m interested in how good ideas spread and how that can help us design better organizations and institutions.

Alex Pentland thinks it is possible to create a mathematical explanation of why society behaves the way it does. He calls this discipline social physics.

“Social physics is a quantitative social science that describes reliable, mathematical connections between information and idea flow on the one hand and people’s behavior on the other. Social physics helps us understand how ideas flow from person to person through the mechanism of social learning and how this flow of ideas ends up shaping the norms, productivity, and creative output of our companies, cities, and societies.”

The goal of applying this science to society is to shape outcomes. Pentland believes we can create systems that build a society “better at avoiding market crashes, ethnic and religious violence, political stalemates, widespread corruption, and dangerous concentration of power.”

All of this would sound great if it didn’t also sound kind of scary. There are a lot of concerns about privacy, which Pentland addresses and which I’m sure he’ll discuss further in the coming chapters. However, even if he is able to get around the privacy issues, the ability to affect how society behaves would give whoever wields it great power. This is perhaps a little paranoid on my part, but I don’t think misusing the ability to “fix” society, as he puts it, is out of the question. Pentland does write about it:

“This vision of a data-driven society implicitly assumes the data will not be abused. The ability to see the details of the market, political revolutions, and to be able to predict and control them is a case of Promethean fire—it could be used for good or for ill.”

My second concern is best summarized by Nicholas Carr in his article “The Limits of Social Engineering”:

“Pentland may be right that our behavior is determined largely by social norms and the influences of our peers, but what he fails to see is that those norms and influences are themselves shaped by history, politics, and economics, not to mention power and prejudice. People don’t have complete freedom in choosing their peer groups. Their choices are constrained by where they live, where they come from, how much money they have, and what they look like. A statistical model of society that ignores issues of class, that takes patterns of influence as givens rather than as historical contingencies, will tend to perpetuate existing social structures and dynamics. It will encourage us to optimize the status quo rather than challenge it.” (h/t to Cathy O’Neil for linking to this piece).

The case studies in the book so far take place in settings where this might not be a huge issue, like eToro, an online trading and investment network. Carr’s (and my) concern may not matter much in these scenarios, especially because Pentland is measuring very specific metrics like return on investment. However, I do believe there is real danger in applying this sort of analysis in places like, say, Ferguson, MO. It will be interesting to read the different case studies and try to identify places where this concern might arise.

It would be very unfair of me to end this without writing about the actual focus of the book (although I’m already a little nauseous from writing this on the train). The book will focus on the two most important concepts of social physics: idea flow within social networks, and social learning, that is, how we take new ideas and turn them into habits, and how learning can be accelerated and shaped by social pressure.

I like to believe that there are better systems of collaboration and cooperation that can make organizations more effective, communities more resilient, and authorities more accountable. Elinor Ostrom developed her work on governing the commons by studying how communities behaved around issues like irrigation and water management. Similarly, I do think Pentland’s insights on idea flow and social learning can help us understand how to design better organizations, communities, and institutions.

The Dangers of Evidence-Based Sentencing

Monday, October 27th, 2014

Note: This post was originally published on mathbabe.org and cross-posted on thegovlab.org.

What is Evidence-based Sentencing?

For several decades, parole and probation departments have been using research-backed assessments to determine the best supervision and treatment strategies for offenders in an effort to reduce the risk of recidivism. In recent years, state and county justice systems have started to apply these risk and needs assessment tools (RNAs) to other parts of the criminal process.

Of particular concern is the use of automated tools to determine imprisonment terms. This relatively new practice of applying RNA information to the sentencing process is known as evidence-based sentencing (EBS).

What the Models Do

The parameters used to determine risk vary by state, and most EBS tools use information that has been central to sentencing schemes for many years, such as an offender’s criminal history. However, an increasing number of states have been using static factors such as gender, age, marital status, education level, employment history, and other demographic information to determine risk and inform sentencing. Especially alarming is the fact that the majority of these risk assessment tools do not take an offender’s particular case into account.

This practice has drawn sharp criticism from Attorney General Eric Holder, who says “using static factors from a criminal’s background could perpetuate racial bias in a system that already delivers 20% longer sentences for young black men than for other offenders.” In its annual letter to the US Sentencing Commission, the Attorney General’s Office states that “utilizing such tools for determining prison sentences to be served will have a disparate and adverse impact on offenders from poor communities already struggling with social ills.” Other critics cite the probable unconstitutionality of using group-based characteristics in risk assessments.

Where the Models Are Used

It is difficult to quantify precisely how many states and counties currently implement these instruments, although at least 20 states have implemented some form of EBS. States that have implemented some sort of EBS at the state or county level (for any type of sentencing decision: parole, imprisonment, etc.) include Pennsylvania, Tennessee, Vermont, Kentucky, Virginia, Arizona, Colorado, California, Idaho, Indiana, Missouri, Nebraska, Ohio, Oregon, Texas, and Wisconsin.

The Role of Race, Education, and Friendship

Overwhelmingly, states do not include race in their risk assessments, since there seems to be a general consensus that doing so would be unconstitutional. However, even though these tools do not take race into consideration directly, many of the variables they do use, such as economic status, education level, and employment, correlate with race. African-Americans and Hispanics are already disproportionately incarcerated, and determining sentences based on these variables might cause further racial disparities.

The very socioeconomic characteristics used in risk assessments, such as income and education level, are already strong predictors of whether someone will go to prison. For example, high school dropouts are 47 times more likely to be incarcerated than people of a similar age who received a four-year college degree. It is reasonable to suspect that courts that include education level as a risk predictor will further exacerbate these disparities.

Some states, such as Texas, take peer relations into account and consider associating with other offenders a “salient problem.” Considering that Texas ranks fourth among states in the rate of people under some sort of correctional control (parole, probation, etc.), and that nationally that rate is 1 in 11 for black males, it is likely that this metric would disproportionately affect African-Americans.

Sonja Starr’s Paper

Moreover, in some cases, socioeconomic and demographic variables receive significant weight. In her forthcoming paper in the Stanford Law Review, Sonja Starr provides a telling example of how these factors are used in presentence reports. From her paper:

For instance, in Missouri, pre-sentence reports include a score for each defendant on a scale from -8 to 7, where “4-7 is rated ‘good,’ 2-3 is ‘above average,’ 0-1 is ‘average’, -1 to -2 is ‘below average,’ and -3 to -8 is ‘poor.’ Unlike most instruments in use, Missouri’s does not include gender. However, an unemployed high school dropout will score three points worse than an employed high school graduate—potentially making the difference between “good” and “average,” or between “average” and “poor.” Likewise, a defendant under age 22 will score three points worse than a defendant over 45. By comparison, having previously served time in prison is worth one point; having four or more prior misdemeanor convictions that resulted in jail time adds one point (three or fewer adds none); having previously had parole or probation revoked is worth one point; and a prison escape is worth one point. Meanwhile, current crime type and severity receive no weight.
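As a rough illustration of the arithmetic Starr describes, here is a toy sketch. The point differences and the rating bands come from the quoted passage; the baseline score, the split of the education/employment gap, and all the names are my assumptions, not Missouri’s actual instrument:

```python
def risk_score(unemployed: bool, hs_dropout: bool, age: int,
               served_prison: bool, jail_misdemeanors: int,
               supervision_revoked: bool, escaped: bool) -> int:
    """Toy reconstruction of the quoted point system, not Missouri's
    actual instrument. Point gaps follow the quoted passage; the baseline
    and the employment/education split are assumptions."""
    score = 4                # assumed baseline with none of the risk markers
    if unemployed:
        score -= 1           # assumed share of the quoted 3-point gap
    if hs_dropout:
        score -= 2           # assumed share of the quoted 3-point gap
    if age < 22:
        score -= 3           # quoted: 3 points worse than a defendant over 45
    if served_prison:
        score -= 1           # quoted: prior prison time is worth one point
    if jail_misdemeanors >= 4:
        score -= 1           # quoted: four or more add a point, three or fewer none
    if supervision_revoked:
        score -= 1           # quoted: revoked parole or probation, one point
    if escaped:
        score -= 1           # quoted: a prison escape is worth one point
    return score             # current crime type and severity never appear

def rating(score: int) -> str:
    """The quoted -8 to 7 scale."""
    if score >= 4:
        return "good"
    if score >= 2:
        return "above average"
    if score >= 0:
        return "average"
    if score >= -2:
        return "below average"
    return "poor"

# A 21-year-old unemployed dropout with a clean record rates worse than a
# 50-year-old employed graduate with prison time and a revoked parole:
print(rating(risk_score(True, True, 21, False, 0, False, False)))   # below average
print(rating(risk_score(False, False, 50, True, 4, True, False)))   # average
```

Under the quoted weights, demographics alone can outweigh an entire criminal record.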

Starr argues that such simple point systems “linearize” a variable’s effect. In the underlying regression models used to calculate risk, a variable’s effect does not translate linearly into a change in the probability of recidivism, but the point system treats it as if it did.
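A toy numerical example of what gets lost (the coefficients here are invented, not taken from any real instrument): in a logistic regression, the same shift in the risk score moves the predicted probability by different amounts depending on the baseline.

```python
import math

def p_recidivism(log_odds: float) -> float:
    """Logistic function: converts log-odds into a probability."""
    return 1 / (1 + math.exp(-log_odds))

# Suppose, hypothetically, a factor adds 0.8 to the log-odds. That same
# 0.8 moves the probability differently for low- and high-risk baselines:
print(p_recidivism(-1.5 + 0.8) - p_recidivism(-1.5))  # about +0.15
print(p_recidivism( 1.5 + 0.8) - p_recidivism( 1.5))  # about +0.09
```

A flat point value treats both defendants identically.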

Another criticism Starr makes is that these instruments make predictions about an individual based on averages for a group. Such tools can predict with reasonable precision the average recidivism rate of all offenders who share the same characteristics as the defendant, but that does not make them necessarily useful for individual predictions.

The Future of EBS Tools

The Model Penal Code is currently being revised and is set to include these risk assessment tools in the sentencing process. According to Starr, this is a serious development, both because it reflects increased support for these practices and because of the Model Penal Code’s great influence on penal codes in other states. Attorney General Eric Holder has already spoken against the practice, but it will be interesting to see whether his successor continues this campaign.

Even if EBS can accurately measure risk of recidivism (which is uncertain, according to Starr), does that mean a greater prison sentence will result in fewer offenses after the offender is released? EBS does not seek to answer this question. Further, if knowing there is a harsh penalty for a particular crime deters people from committing it, wouldn’t adding more uncertainty to sentencing (EBS tools are not always transparent and are sometimes proprietary) effectively remove this deterrent?

Even though many questions remain unanswered, and while several people have been critical of the practice, there seems to be great support for these instruments. They are especially easy to support when they are overwhelmingly regarded as progressive and scientific, a characterization Starr refutes. While there is certainly a place for data analytics and actuarial methods in the criminal justice system, it is important that such research be applied with the appropriate caution. Or perhaps not at all. Even if the tools had full statistical support, the risk of further exacerbating an already disparate criminal justice system should be enough to halt this practice.

Both Starr and Holder believe there is a strong case to be made that the risk prediction instruments now in use are unconstitutional. But EBS has strong advocates, so it will be a difficult fight. Ultimately, evidence-based sentencing determines a person’s sentence based not on what the person has done, but on who that person is.

De-anonymizing open data, just because you can… should you?

Thursday, October 23rd, 2014

If an essential part of the data reveals personally identifiable information (PII), should the data not be released? Should the users of open data be the ones responsible for ensuring proper use of the data?

I mention this question because of an article by an intrepid Gawker reporter who decided he could correlate photos of celebrities in NYC taxis (with visible taxi medallions) with the de-anonymized database of every NYC cab ride in 2013 to determine whether celebrities tipped their cab drivers. Of course, the article is another example of “celebrities doing normal people things, like using taxis,” but the underlying question here is: just because you can violate people’s privacy, does it mean you should?

Identifying celebrities and their cab rides was first done by Anthony Tockar, an intern at Neustar. In his post, he recognizes that it is relatively easy to reveal personal information about people. Not only could he match cab rides to a couple of celebrities, but he also showed how easily you can see who frequently visits Hustler’s. Tockar says:

Now while this information is relatively benign, particularly a year down the line, I have revealed information that was not previously in the public domain.

He uses these examples to introduce a method of privatizing data called “differential privacy.” Differential privacy basically adds noise to the data when you zoom in on it so you can’t identify specific information about an individual, but you can still get accurate results when you look at the data as a whole. This is best exemplified by the graphic below.

The graphic shows the average speed of cab drivers throughout the day. The top half plots the actual average speed of all drivers alongside the average speed of all drivers after the data is run through the differential privacy algorithm. The bottom half shows the same for an individual cab driver. Click on the graphic to go to an interactive tool that lets you play around with the privacy parameter, ε.
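For the curious, here is a minimal sketch of the Laplace mechanism, the textbook way to make a numeric query like an average differentially private. The function, the speed bounds, and the ε value are illustrative assumptions, not Tockar’s actual implementation:

```python
import numpy as np

def private_mean(values, epsilon: float, lo: float = 0.0, hi: float = 60.0) -> float:
    """Differentially private mean via the Laplace mechanism. Clipping to
    [lo, hi] bounds any single record's influence, so the sensitivity of
    the mean over n records is (hi - lo) / n."""
    clipped = np.clip(np.asarray(values, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(clipped)
    return clipped.mean() + np.random.laplace(0.0, sensitivity / epsilon)

# With thousands of drivers the noise barely moves the citywide average,
# but for a single driver it swamps the true value, which is exactly the
# contrast the graphic shows:
city_speeds = np.random.uniform(5, 35, size=10_000)  # fake speeds in mph
print(private_mean(city_speeds, epsilon=0.1))        # close to the true mean (~20)
print(private_mean(city_speeds[:1], epsilon=0.1))    # mostly noise
```

The smaller ε gets, the stronger the privacy guarantee and the noisier the answer, which is the trade-off the interactive tool lets you explore.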

But we’re still struggling to get data off PDFs or, worse, out of filing cabinets. It’ll take years before we can build such privacy mechanisms into current open data! What to do in the meantime? It would seem that Gawker stopped reading after “Bradley Cooper left no tip” (actually, we don’t know, since tips are not recorded if paid in cash). Just because someone could look up ten celebrities’ cab rides, does it mean they should have? The reporter even quotes Tockar’s line about “revealing information not previously in the public domain.” The irony seems to have been lost on Gawker. I’m of the opinion that Gawker shouldn’t have published an article about celebrities’ cab rides any more than it should publish their phone numbers if they were available in a phone book. Unless it was trying to make a point about privacy and open data, which would’ve made for a great conversation piece. Except it wasn’t, since it was all about tipping. They even reached out to publicists for comments on the tipping.

Ultimately, who cares about Bradley Cooper taking a taxi? But when you go “hey, let’s see how many celebrities I can ID from this data” and write an article about it without questioning the privacy implications, you’re basically saying “yes, because you can, it means you should.”

UPDATE: ok, so apparently there is a reason it’s called “Gawker”. See this example where this same author tries to out a Fox News reporter. Today I learned.

What One Database Marketing Company Knows About Me

Sunday, September 8th, 2013

It’s no surprise that marketing companies gather data about you to sell to advertisers, who then deliver targeted ads via mail, email, or while you surf the internet. Sometimes it’s even creepy how much they know about you. So far, though, it’s been a bit of a mystery exactly how much of your information these companies have. A few days ago, one marketing technology company, Acxiom, launched a new service called AboutTheData.com, which allows people to take a peek at how much information the company has gathered on them. Acxiom is no small marketing company; according to the NYTimes, it has created the world’s largest commercial database on consumers. I decided to give the service a try to see just how much data this company had about me.

Since this is such a large company, and I’m such an active internet user, I expected Acxiom to have gathered a lot of information about me. I was slightly disappointed (or relieved?) to find out that they didn’t have much information on me at all (honestly, I don’t know how I should feel about this). Before going into the data, here is a little more information about where this data comes from and what we are shown.

According to Acxiom, this data is collected from:

  • Government records, public records and publicly available data – like data from telephone directories, website directories and postings, property and assessor files, and government issued licenses
  • Data from surveys and questionnaires consumers fill out
  • General data from other commercial entities where consumers have received notice of how data about them will be used, and offered a choice about whether or not to allow those uses – like demographic data

The data they show us is their “core data.” This data is used to generate the modeled insights and analytics used for marketing, which they do not show. Acxiom says we are shown all of their core data, but they make no mention of whether there is other data that is neither core data nor modeled insights.

The site allows you to view data from six categories. Below is the information that has been gathered on me. Economic and Shopping data cover the past 24 months.

Characteristic Data: Male, Hispanic, inferred single
Home Data: No data.
Vehicle Data: No data.
Economic Data: Regular credit card holder (as opposed to Premium/Gold), Regular Visa, 2 cash purchases (includes checks), 1 Visa purchase.
Shopping Data: $139 spent on 3 purchases (the ones referred to above?), 2 offline totaling $100, average $50 each (one purchase < $50, the other > $50, so I guess it’s a coincidence they add up to $100), 1 online for $39. My supposed interests include books, magazines, Christmas gift purchases, ethnic products (??), lifestyles, interests, and passions.
Households Interests Data: No data.

It makes sense that there is not a lot of information about my home or vehicle, since I currently own neither (although there was no info on my previous vehicle ownership). Perhaps homeowners and car owners would have these sections filled out entirely. The household interests category is meant to include data related to the interests of me or people in my household (examples given on the site include gardening, traveling, and sports). I’m not so surprised this is also empty, but I’m not sure why they guess that my shopping interests include ethnic products and yet they are not able to guess that I enjoy traveling. As for Characteristic Data? My Twitter feed should be enough to reveal that I’m a single Hispanic male. Since you have to provide your name, email, address, and the last 4 digits of your SSN, it’s pretty safe to assume that they also have this information.

**To skip Luis’ short history of shopping, jump to the next paragraph.
Economic and Shopping Data provide a few more hints as to where the data comes from. First of all, they only have three purchases. That’s it. Out of the 3,100 card/check purchases I’ve made over the past 24 months, they have 3. I tried looking in Mint for two offline purchases that add up to $100, but this proved to be a very difficult exercise. Even after filtering offline purchases and sorting the data, there were too many possible combinations. For now, those two offline purchases remain a mystery. I was able to find a suspect for the online payment of $39, though: a $39 seat upgrade at United Airlines. I can’t be sure this is the one, since I bought the upgrade along with a plane ticket that does not show up in my AboutTheData. However, my suspicion arises from the fact that Mint had prepared a targeted ad for me by placing a flashy green dollar sign next to that purchase. This also could’ve been a coincidence.

Conclusions/Best Guesses
Given the fact that I spend A LOT of time on the internet and the high number of purchases I’ve made over the years (I should cut down on those), I am surprised that Acxiom does not have more data about me. Basically, they know I’m a single Hispanic male, and that’s about it. I can’t possibly imagine what they could gather from the rest of my data that’s worth $$$ to advertisers. Additionally, it seems a lot of their data comes from publicly available government data sets (home and car ownership), and, at least in my case, not a lot comes from either my online habits or my shopping habits. I presume most of my important data is owned by Facebook and Google, and I’m pretty confident that they do not sell or share my data with Acxiom.

Last thought: AboutTheData lets you edit your data so that you can receive more accurate targeted advertising. I’m curious to know who uses Acxiom data to target me, so I would’ve loved to enter distinctive preferences that do not apply to me (yet) such as “pregnancy,” “colonoscopies,” “underwater basket weaving,” or “Cook Islands National Women’s Football League” to see where those ads pop up. Unfortunately, AboutTheData only lets you change the above-mentioned interests to ‘true’ or ‘false’. I guess they thought about the trolls.