Everything Except the Worthwhile

In a 1968 speech, Robert F. Kennedy said that “[GDP] measures everything… except that which makes life worthwhile.” I think this statement was, and still is, deeply relevant. Both here in the U.S. and across the world, GDP (Gross Domestic Product) is often used as a metric of a country's, and its citizens', well-being. And we seem almost hardwired to associate a high GDP with a good economy and a good life. But what exactly is GDP, and what might be some alternatives to it?

In technical terms, GDP measures the total value of final goods and services produced within a country’s borders and is defined as the sum of consumption by households, investment by businesses, government purchases, and net purchases by foreigners, i.e. exports minus imports (Kurtzleben GDP). GDP is generally what people refer to when they say that one economy is larger or smaller than another. When an astute economist posits that “the Chinese economy will surpass that of the United States by 2050,” they are probably referring to growth as measured by GDP. I don’t pretend to think anyone believes that GDP is a perfect measure of a nation’s well-being; any good data miner/harvester/observer knows that any attempt to measure something as large as the “health of an economy” will miss something. And for what it is supposed to measure, GDP is great.
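For concreteness, this definition corresponds to the standard expenditure identity, where C is household consumption, I is business investment, G is government purchases, and X − M is net exports (my notation, not Kurtzleben's):

\[
\text{GDP} = C + I + G + (X - M)
\]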

Returning to RFK’s statement from above, I wonder if the material output of a nation is really the most important thing to measure. I mean, if the entire workforce of the U.S. were replaced by advanced humanoid robots with no need for food, sleep, or a living wage, I bet the GDP would be pretty high, but it wouldn’t account for the millions of people who would be left destitute. So what are some alternative ways of measuring the well-being of a country?

One of the more prominent alternatives to GDP is what is referred to as Gross National Happiness, or GNH. GNH is a metric used by Bhutan since the 1970s to measure mental, rather than material, well-being (Kurtzleben Proposals). Unlike GDP, GNH is not calculated through objective analysis of economic output. Instead, the government of Bhutan issues a national survey that measures the well-being of its citizens across several areas, from education to satisfaction with the government (Kurtzleben Proposals). This data is then supposed to drive the nation’s policy (GNH Survey Report). I think this approach is really ambitious and falls quite neatly in line with measuring things that “make life worthwhile”. That said, I can imagine there might be some logistical and statistical issues in interpreting data that could be very subjective. Bhutan is, after all, a fairly small country. For the U.S. government, with its sprawling landmass, dozens of local education systems, disparate socio-economic classes, and population of over 300 million, a GNH would be an absolutely daunting task. In my opinion, this type of metric would be better implemented at the state level.

The Happy Planet Index (HPI), a project by the New Economics Foundation, is similar to the GNH. It is a measure of “sustainable well-being”, comparing how efficiently the residents of a country use natural resources to achieve lives with high well-being (Happy Planet Methods). The HPI is calculated by multiplying life expectancy by experienced well-being as reported via survey, adjusting the result for inequality of outcomes, and dividing it all by the country’s ecological footprint. This approach, combining both hard statistics and subjective survey responses, seems like it might produce a more holistic summary of a country’s well-being and might be easier to conduct on a national scale.
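To make the arithmetic concrete, here is a minimal sketch of that structure in Python with made-up numbers; the real NEF methodology rescales each component and handles the inequality adjustment more carefully, so treat this as illustrative only.

```python
def happy_planet_index(life_expectancy: float,
                       experienced_wellbeing: float,
                       inequality_adjustment: float,
                       ecological_footprint: float) -> float:
    """Simplified HPI-style ratio: inequality-adjusted 'happy life years'
    per unit of ecological footprint. Not the official NEF formula."""
    happy_life_years = life_expectancy * (experienced_wellbeing / 10) * inequality_adjustment
    return happy_life_years / ecological_footprint

# hypothetical country: 78-year life expectancy, well-being of 7.0 out of 10,
# 15% lost to inequality of outcomes, footprint of 5 global hectares per person
print(happy_planet_index(78, 7.0, 0.85, 5.0))
```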

If it wasn’t clear by now, I am fairly opposed to relying on measures of material production to report the well-being of a country. GDP is a measure of economic growth, and it would be nice to have alternative measures more commonly reported. I find myself drawn especially to metrics that attempt to account for inequities in life expectancy and self-reported happiness, as these have a better chance of representing a greater proportion of the population. No metric is perfect, but I certainly think something like the Happy Planet Index has potential.

Sources Accessed

“What does GDP measure?” – Danielle Kurtzleben

“Are there other proposals for better ways to measure the economy?” – Danielle Kurtzleben

Gross National Happiness

Happy Planet Index Methods

Jumping the Electoral Shark

For months after the 2016 election, there seemed to be no shortage of articles and think pieces about how the national polling apparatus failed to predict the Trump win and how this failure was a result of inaccurate polling methods. And now, just over two years since the failure of those polls, the media is gearing up for 2020 with a host of predictive polls that read more like fanfiction than hard data (but hey, at least we know that Joe Biden has a seven-point hypothetical lead on Trump). So, before the storm of polls begins to dominate the 24/7 newsroom, I figured I would work out how polls are conducted, how conclusions are drawn from them, and whether we can even rely on polls at all. I’m sure I’ll answer all of these in a two-page blog post.

The primary method used by pollsters and publications alike to conduct polling is described by Nate Cohn in his article “Why Did We Do the Poll the Way We Did?”. This method, called “random digit dialing”, is exactly what it sounds like: random phone numbers are dialed until someone picks up and participates in the poll. This is, of course, not the only way to conduct a poll. Cohn explains that for the titular poll in question, the team over at the New York Times used registered voter records to gather their phone numbers. Already we can see the problems inherent in conducting polling, and not just political polls either. All polls depend on there being a willing population of pollees for the pollsters’ analysis to be of any use. Even if a poll satisfies all the basic conditions for statistical inference (randomization, the 10% condition, the success/failure condition), there is no guarantee that it’s accurate; all data gathered from polls is some shade of inaccurate.
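As a quick illustration of the last two of those checks, and of how wide the uncertainty stays even when they pass, here is a minimal sketch with made-up numbers; the sample size, proportion, and population figure are hypothetical, and randomness itself has to be judged from the sampling design rather than computed.

```python
import math

def check_conditions(n: int, p_hat: float, population: int) -> dict:
    """Two of the textbook checks for inference on a polled proportion.
    (The randomization condition can't be verified by arithmetic.)"""
    return {
        "ten_percent": n <= 0.10 * population,                          # sample is at most 10% of population
        "success_failure": n * p_hat >= 10 and n * (1 - p_hat) >= 10,   # enough expected successes and failures
    }

def margin_of_error(n: int, p_hat: float, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# hypothetical poll: 900 respondents, 48% support, drawn from roughly 230 million adults
n, p_hat = 900, 0.48
print(check_conditions(n, p_hat, population=230_000_000))
print(f"margin of error: +/- {margin_of_error(n, p_hat):.1%}")   # roughly +/- 3.3 points
```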

Analysis is a whole different issue. Was the population composed of all possible phone numbers or just registered voters? What was your line of questioning? Were your questions leading? Might the pollees have been giving you “polite” answers? And once you have your data, who’s to say that the reality of the situation hasn’t already changed? It is, in short, incredibly difficult. I think this fact is really put on display when you see hardcore analysts just as uncertain at this point as anyone else.

In a March 20th podcast, the folks over at FiveThirtyEight.com discussed Trump’s re-election. Some, like Geoffrey Skelley, make the claim that, at present, “econometrics models” place Trump as a possible favorite for re-election. Others, Nate Silver specifically, are not so certain Trump’s economy can make up for the fact that his approval rating is sitting around 41% or 42%. The discussion goes around and around and never quite lends the listener a real sense of certainty.

Don’t get me wrong, there is still more than a year before anyone heads to the ballot box to cast a vote, so I don’t fault FiveThirtyEight or anyone else for not having some bold assertion about the result of the 2020 race. However, both politicians and voters rely on polls to make decisions, the former using them to determine whether they should run at all and the latter deciding whether their vote will be any good. And if those decisions rest on faulty information, they can be made prematurely.

In the realm of polling, I find myself inclined to agree with E. J. Dionne and Thomas E. Mann in saying that “polling is a tool, not a principle”. I think that polls obviously have a necessary and permanent place in our political discourse. To suggest that polling should be done away with completely is far too naive (even for my tastes). What is important, however, is that both politicians and citizens understand the deficiencies in the polling apparatus.

Here’s Batman agreeing with me

Sources Accessed

“Polling and Public Opinion” – E. J. Dionne and Thomas E. Mann

“Why Did We Do the Poll the Way We Did?” – Nate Cohn

“What Do We Know About Trump’s Re-Election Chances So Far?” – FiveThirtyEight

Sola Notitia

Yuval Harari ends his 2017 work Homo Deus by laying out a new system of thinking that he calls the “Data Religion”. At the heart of this religion are two central tenets: 1) that a Dataist ought to “maximize data flow” by connecting as many producers and consumers of information as possible, and 2) that everything must be “[linked] to the system”, whether it wants to be or not (Harari 387). The greatest sin a Dataist can commit is to prevent data from flowing freely. This thinking parallels inclusive, non-violent moral and ethical frameworks like Buddhism and Christianity, but rather than acting for the sake of others or for the self, everything is done so that data may flow through all things. Information, at least according to Harari, doesn’t just want to be free; it must be free.

Harari goes out of his way to illustrate how these principles of data operate in the real world. According to Harari, it was not God or the sacredness of individual liberties that allowed the United States to win the Cold War, but rather the fact that the system of state communism in the USSR was not optimized for data flow. Capitalism, in juxtaposition to state communism, processes data by “directly connecting producers and consumers to one another” and allows them to communicate freely (Harari 374). Therefore, free market capitalism is the most “sacred” system of economics (from a Dataist perspective, at least).

And this brings me to the actual crux of this piece: I’m not so certain the current state of capitalism is Dataist at all.

In June 2018, the Obama-era regulations protecting net neutrality were removed, meaning that the FCC no longer had the ability to block internet service providers (ISPs) like Verizon or AT&T from putting data behind additional paywalls or throttling service in ways favorable to the ISP (Collins 2). This was another step in favor of a profit-based approach to data. This isn’t, of course, to say that the pre-2015 internet was an unregulated wild west, but the dismantling of the federal rules protecting net neutrality was another nail in the coffin of completely free-moving data on the internet. Now it would not be out of the question for ISPs to place services like Facebook and Spotify in bundles to be sold to the consumer (not unlike cable television).

A Dataist’s nightmare?  Internet bundles offered by a Portuguese ISP.

The fact of the matter is that capitalism, while connecting producers and consumers of data together freely, is driven by profit.  It’s unfortunate but you simply cannot blame ISPs for wanting to make additional money off a service they operate; it makes logical sense in context.  But this should be considered a grievous sin by any self-respecting Dataist.


So, are there any alternatives?  Is there anyone out there standing up for a Dataist worldview?

“No monopoly should be able to prevent works, tools, or ideas from: being freely used, expressed, exchanged, recombined, or taught; nor to violate individual privacy or human rights. A creator’s right to be compensated for their work or ideas is only acceptable within these limitations.” – U.S. Pirate Party platform

Enter the Pirate Party. Like the Greens or the 4th International, the Pirate Party is an international movement that seeks to reconstruct the global economic system in favor of an open and free flow of information. They do not seek to dismantle the free market or centralize the regulation of data, but neither do they seek to monetize information. Instead, their aim is to deregulate the flow of information as much as possible. This includes the abolition of copyrights, Digital Rights Management, and patents (Party Platform 2). These changes might seem at odds with a capitalist ethic, but I think they synchronize nicely with a Dataist one.


Sources Used:

The Rags to Riches to Rags Story

Despite my best efforts, I often have difficulty mustering sympathy for those who make millions of dollars annually for doing something I consider to be “useless”. But over these last few weeks, as we have engaged with all manner of sociological theories, I keep coming back to the question of why so many professional athletes go broke so quickly after leaving their careers.

To begin this discussion, I think it’s important to understand what exactly the numbers are. According to an article by Chris Dudley, an estimated 60 percent of NBA players go broke within five years of leaving the league, and a similar phenomenon is seen in the NFL, where approximately 78 percent of players go bankrupt after just two years (Dudley 1). It’s important to note here that, while Dudley does not cite specific sources for these figures, the fact that they came up in class discussion seems to indicate they are somewhat reliable. These numbers, if accurate, are shocking. I can’t imagine this is commonplace among other high-earning occupations.

So what exactly might be behind this?  Dudley’s own diagnosis is that professional athletes are the victims of predatory contracts, aggressive financial advisors who direct the players towards scams, and alienation from their wealth (Dudley 2-3).  This is also coupled with the fact that the actual earning period of professional athletes, unlike other high-earning occupations, is only a few years (Dudley 2). As a result of these compounding factors, professional athletes come into vast sums of money very quickly and are encouraged to use it while they have it only to discover that they didn’t save any for a rainy day.

I think this pattern of behavior among professional athletes is something of a microcosm of James Coleman’s theory of social capital. The two fundamental elements of social capital, the trustworthiness of the social environment and the extent of the obligations held (Coleman 2), can be seen in the way professional athletes relate to those they place in charge of their wealth. As professional athletes come into tremendous sums of money, they turn to financial advisors to manage it, no doubt under the assumption that it will be managed with the athlete’s interests in mind, and the more money they make, the more they come to rely on their financial advisors.

It is in this relationship that we can observe that social capital is often created and used in an imbalanced way. Athletes, not wanting to be burdened with the direct management of their resources, place trust in their advisors, but the advisors, seeking to make money for themselves, take the trust placed in them and run with it. This isn’t, of course, to say that all financial managers are leeches (#notallfinancialmanagers), but it is certainly illustrative that the relationships in Coleman’s theory are not always balanced. Just as some economic models presuppose rational action and equal distribution of information, it is easy to imagine social capital operating in the same fashion. But the fact that people lie or cheat throws an entirely different wrench into the equation. Humans, in both physical and social markets, do not always act rationally or honestly.

Sources Accessed:

“Money lessons learned from pro athletes’ financial fouls” – Chris Dudley

“Human Capital and Social Capital” – James Coleman

Beating the Feynman Trap

One of the biggest downfalls of “Big Data” as it stands now is simply how broad, unorganized, and abstract it all seems to be.  In his 2019 article “The Exaggerated Promise of So-Called Unbiased Data Mining” Gary Smith highlights this problem, synthesizing it into something he calls the Feynman trap.  The Feynman trap, as defined by Smith, is the “ransacking [of] data for patterns without any preconceived idea of what one is looking for” (Smith). I can imagine it’s an easy trap to fall into.  How can we, researchers and society at large, leverage the technology we have when there is a near infinite amount of information to comb through?

The answer, in my humble history-major opinion, resides in the definition of the Feynman trap itself. As Smith himself says, “good research begins with a clear idea of what one is looking for and expects to find [while data mining] just looks for patterns and inevitably finds some” (Smith). So we have a “how”: for data mining to truly be useful, we need to be specific and targeted.

A great example of this kind of data mining can be found in a 2014 article titled “Data Mining Reveals How Conspiracy Theories Emerge on Facebook”. In this study, researchers used data mining to analyze how much time users spent engaging with official media news outlets versus alternative ones. The results indicated that, across a sample of 1 million Facebook users, the average amount of time spent engaging with mainstream news, official political channels, and alternative sources was approximately the same (MIT Tech Review). I was amazed to find a study from 2014 dealing with a practical application of data mining that would undoubtedly be useful both now and in the future.
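As a rough illustration of what that kind of comparison looks like in practice, here is a hypothetical sketch; the file name, column names, and source categories are my own assumptions, not details from the study.

```python
import pandas as pd

# expected columns: user_id, source_type ("mainstream", "political", "alternative"), minutes_engaged
engagement = pd.read_csv("facebook_engagement_sample.csv")

# average engagement time per source category
avg_by_source = (
    engagement
    .groupby("source_type")["minutes_engaged"]
    .mean()
    .sort_values(ascending=False)
)

# roughly equal averages across categories would match the study's finding
print(avg_by_source)
```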

And let’s not forget that there was most likely explicit meddling by foreign agents in the 2016 American presidential election as well as in subsequent elections in Europe. Imagine how useful data mining might be in observing and predicting the responses populations might have to this kind of interference. Imagine using data mining to counter or block attempts at digital meddling. This kind of technology could be vital in securing the openness of information and political processes in the 21st century.

I’m sure people high above my pay grade have already been considering these possibilities. Or maybe not; after all, the study referred to in this blog is from 2014. Nevertheless, the point still stands. If data mining wants to remain credible and useful for the rest of the 21st century, it needs to move beyond its greedy roots. Data can only be useful when given context. What does it matter if Bitcoin goes up when it rains in New England? If data mining does not adapt, it may never escape the Feynman trap.

Sources Accessed:

“Data Mining Reveals How Conspiracy Theories Emerge on Facebook” – MIT Technology Review

“The Exaggerated Promise of So-Called Unbiased Data Mining” – Gary Smith

Xi’s File Cabinet

Depictions of dystopian governments, from Orwell to Black Mirror, often make an effort to include systems of surveillance in their fiction. These systems track, catalogue, and manage citizens’ lives, leveraging behavior for the good of the state and curbing dissent at every opportunity. And, aside from those who wield the levers of power, there is almost no perception of the surveillance state as benevolent. But if we consider the argument put forward by Alex Pentland in “A Data Driven Society”, one that highlights the need for systems that can manage and direct what he calls “idea flow” (Pentland 81), can we begin to construct a “benevolent” surveillance state?

The People’s Republic of China, although a far cry from Orwell’s Oceania, has a history of surveilling and policing its citizens’ behavior. While such practices date back as far as the 11th century, the more recent surveillance ventures began after the 1949 takeover by the Chinese Communist Party. These programs were aimed at monitoring movement and loyalty to the state and placed citizens at odds with one another (Mistreanu 4). In 2014, the State Council announced the next step: a “national credit system” that would “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step” (Planning Outline). Intended to be fully implemented by 2020, the announcement sent many Western journalists and human rights advocates spinning.

And while their concerns are justified (the CCP has a history of punishing dissent), the aim of the social credit system is, at least on paper, quite benevolent: promoting good behavior through tangible rewards given to citizens who behave in certain ways. Citizens of Rongcheng, one of three cities in the CCP’s pilot program, are given 1,000 points to begin with, a number that changes depending on recorded behavior. From there, five points can be taken away for a traffic ticket, exemplary business conduct or commitment to family can earn someone 30 points, and even charity work is logged and rewarded (Mistreanu 2). All private citizens, and businesses, are beholden to this scoring system (Mistreanu 2).
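Just to make the mechanics concrete, here is a toy sketch of that kind of point ledger. The starting balance and the -5/+30 values come from the description above; the event names, the +5 for charity work, and the function itself are illustrative assumptions, not details of the actual Rongcheng system.

```python
# Toy model of a Rongcheng-style scoring ledger (illustrative only).
STARTING_SCORE = 1000

POINT_CHANGES = {
    "traffic_ticket": -5,       # five points deducted for a traffic ticket
    "exemplary_business": 30,   # exemplary business conduct or commitment to family
    "charity_work": 5,          # charity work is logged and rewarded (value assumed)
}

def update_score(score, events):
    """Apply a sequence of recorded behaviors to a citizen's running score."""
    for event in events:
        score += POINT_CHANGES.get(event, 0)  # unrecognized events leave the score unchanged
    return score

# hypothetical citizen: one ticket, one charity event, one commendation
print(update_score(STARTING_SCORE, ["traffic_ticket", "charity_work", "exemplary_business"]))  # 1030
```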

What’s fascinating about this system is that it is not carried out by the central government alone.  In fact, there is no “national” social credit system in any sense. Instead, the social credit system being implemented across China is a decentralized patchwork of credit values, each with their own scoring factors, operating out of online payment providers like AliPay, city governments, hospitals, and even libraries (Mistreanu 4).

This effort illustrates the potential of using data at local, regional, and national levels to “nudge” behavior in beneficial ways. With the amount of data swimming around in the aether, it only makes sense that it be used as leverage to construct a better society. But there are issues. The Chinese social credit system is not a radical change; in fact, one could argue it is simply a way for the CCP to solidify its position and increase its options for suppression. But imagine different circumstances. Imagine a society leveraging data on the macroscopic level, everything from government records to online receipts, as is being done in China, paired with a comprehensive list of digital rights for consumer-citizens. The fusion of Pentland’s “Data New Deal” (Pentland 83) and the CCP’s social credit system might be the optimal template for the 21st century.

Sources Accessed: