If Robots Don’t Cause Mass Unemployment, Will Many Remaining Jobs Pay $2/Hr?

Maybe we’ll get lucky, and robots/AI won’t create mass unemployment. But of the remaining jobs, how many will be good jobs? A recent study of Amazon’s Mechanical Turk, “one of the largest micro-crowdsourcing markets that is widely used by industry,” raises a red flag.

The research found that on average requesters paid $11/hr. But when you factor in the considerable time workers spend on unpaid labor such as searching for tasks, the median wage drops to just $2/hr. Of the workers whose tasks they analyzed, only 4% earned more than the federal minimum wage of $7.25/hr.
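
How do you get from $11/hr to $2/hr? Unpaid time has to swallow most of the workday. Here's a back-of-the-envelope sketch in Python, with my own made-up numbers rather than the study's data:

# Back-of-the-envelope sketch (made-up numbers, not the study's data)
paid_rate = 11.00    # $/hr earned while actually working on tasks
paid_hours = 1.5     # hours of paid task work
unpaid_hours = 6.75  # hours spent searching for tasks, qualifying, etc.

# Effective wage = total pay spread across ALL hours, paid and unpaid
effective_wage = (paid_rate * paid_hours) / (paid_hours + unpaid_hours)
print(f"${effective_wage:.2f}/hr")  # $2.00/hr

For the math to work out, roughly four out of every five hours have to go unpaid.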

According to researchers Mary Gray and Siddharth Suri, who last year worked with Pew on a fascinating study of “on-demand work,” this type of work is going to play an increasingly important role in the US economy:

Labor economists Lawrence Katz and Al Krueger estimate that conventional temp and alternative contract-driven work rose from 10 to 16%, accounting for all net employment growth in the US economy in the past decade. Assuming Pew’s trends continue at the current rate, by the year 2027, nearly 1 in 3 American adults will transition to online platforms to support themselves with on-demand gig work.

This is the other reason why we should stop debating whether robots will destroy more jobs than they create. Even if they don’t, we could still end up living in a dystopian future. It’s time to stop obsessing over questions we can’t answer and start obsessing over how to build a future that’s worth living in.

In 2018, Let’s Stop Debating If Robots Will Take Our Jobs

Here’s my New Year’s Resolution for all of us: let’s stop arguing whether robots/AI will take our jobs.

It’s not that the answer doesn’t matter — it’s absolutely critical. But short of asking a Terminator that comes back in time, we have no way to know what the answer’s going to be. It doesn’t matter how many Smart Girls & Boys crank out fancy reports filled with numbers they painstakingly crunched. There are just too many unknowns.

What convinced me that this argument is a waste of time were the extraordinary victories this year by AlphaGo and OpenAI. Four years ago, if you’d told me that an AI could teach itself to play Go — which computers weren’t supposed to master for at least a decade — better than almost all human players in just 3 days, I’d have thought you’d been smoking something. I’m not saying AI is going to advance much faster than expected. But there’s simply no way to know.

Does that mean we should stop worrying? Not at all. Regardless of whether robots/AI create mass unemployment, they are going to have a deep, profound impact on the economy — and create a future that either benefits everyone or mostly those at the top. We could easily end up with plenty of jobs that pay terribly & that lay the groundwork for someone much scarier than Trump. And from Harlem to Harlan County, from East LA to the Midwest Rust Belt, we need to take advantage of these massive technological changes to make whole the communities that our society abandoned.

In short, in 2018 let’s stop obsessing over predicting the future and start obsessing over how to create a more just, prosperous economy for all, regardless of how many jobs robots/AI destroy.

Wild Rats and “Evidence-based” Programming Language Design [Very Geeky]

If we want to create a world in which in every community as many people as possible can participate in creating robotics, AI, virtual and augmented reality, etc., we’re going to have to “disrupt” the world of programming languages so coding is easier for adults. As part of a pilot project to try to do that — Mixed Reality for All — I’ve started reading research on “evidence-based” programming language design. I’m just getting started, but so far much of the research reminds me of a story my grandfather Sandy used to tell.

In the 1950s, Sandy spent some time designing rat mazes for some university psychologists. The rats they used were carefully controlled; you literally ordered them out of a catalog based on the characteristics you wanted. One day, the lab techs put a new batch of rats in a rat maze, went away for a bit, then came back to see how the rats were progressing. They couldn’t find them. After a little investigation, they figured out what had happened. They’d accidentally been sent a batch of wild rats. Rather than running the maze like genetically bred docile rats did, the wild rats had found a place in the maze that provided some visual cover, chewed a hole in the floor of the maze, and escaped!

Sandy’s point was that the psychologists’ work on “intelligence” assumed that they were working with animals that weren’t that intelligent. If you worked with truly intelligent animals who had to cope with the complexities of the real world, it was hard to get them to focus on the very small corner of intelligence that you wanted to study. Like a prisoner of war, the first duty of a wild animal being held in a lab is to escape.

A lot of the research on the best way to design a programming language seems to fall into the same trap: it focuses on one small corner of design while not paying enough attention to the larger context.

For example, there’s a programming language called Quorum that was designed from the ground up to be driven by the best evidence we have about what makes a good programming language. Their website has a great page that summarizes a lot of the best evidence. The first example: whether programming languages should be statically or dynamically typed (WARNING: this is pretty geeky).

Some programming languages, such as Java, use “static typing”: whenever you are going to store info in a variable, you have to tell the computer what type of data you’re going to put in that variable.

String firstName;
firstName = "Beyoncé";
int numberOfAlbums;
numberOfAlbums = 6;

System.out.println(firstName + " has sold " + numberOfAlbums + " albums");

Other programming languages, such as Python, use what’s called “dynamic typing,” where one of the advantages is that you don’t have to spell out what type of data you’re using; the computer figures it out:

firstName = "Beyoncé"
numberOfAlbums = 6

print firstName + " has sold " + numberOfAlbums + " albums"

There are pros and cons for each approach, and more than a few coders like to trash each other for their opinions about it. Guess what, say the evidence-based coding language researchers. We don’t need no stinkin’ opinions, we’ve got data: beginners make a lot fewer mistakes with static typing. Game over!

I read one of the papers making that case. It was a smart paper. Essentially, they showed programmers one of two options:

function taskCollision(player: User, opponent: Enemy, engine: GameEngine) // Static Typing
function taskCollision(player, opponent, engine)             // Dynamic Typing

On average, programmers made fewer mistakes with the statically typed version.

Makes sense to me. In their example, static typing gives me more info about what’s going on, so I’m less likely to get stuck.
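
To make that concrete, here’s a toy example of my own (not from the paper) in Python. With no annotations, mixing up the argument order only blows up when the code runs:

class User:
    def __init__(self):
        self.health = 100

class Enemy:
    def __init__(self):
        self.damage = 10

class GameEngine:
    def __init__(self):
        self.fps = 60

def taskCollision(player, opponent, engine):
    # With dynamic typing, nothing stops us from passing the wrong objects
    player.health -= opponent.damage

# Oops: the arguments are in the wrong order, but Python happily runs this...
taskCollision(GameEngine(), User(), Enemy())
# ...until it crashes: AttributeError: 'GameEngine' object has no attribute 'health'

Annotate the signature the way the statically typed version does — taskCollision(player: User, opponent: Enemy, engine: GameEngine) — and a type checker such as mypy flags the swapped arguments before the code ever runs.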

But in the real world, it’s not so cut and dried.

Take one of my favorite new editors for writing code: Visual Studio Code. One of the things I like about it is that you can customize it, including adding new features. To do so, you use TypeScript, a version of JavaScript that has static typing. In theory, static typing should give me more info. But in practice? Here’s some sample code from one of their tutorials:

export function activate(context: ExtensionContext) {
    ...
}

...

class WordCounterController {
    private _wordCounter: WordCounter;
    private _disposable: Disposable;

    constructor(wordCounter: WordCounter) {
        this._wordCounter = wordCounter;

        // subscribe to selection change and editor activation events
        let subscriptions: Disposable[] = [];

When you first see this code, it looks like gibberish. The fact that the static typing tells you one variable is an ExtensionContext and another one is a Disposable doesn’t help much.

In my experience, many systems that use static typing have the same problem: it takes a lot of lines of code to do just about anything, and a lot of the “types” have strange names. It’s almost as if languages that use static typing also tend to encourage coders to be more verbose and use (somewhat) geekier names. In contrast, one of the reasons I like working in the programming language Python, which is dynamically typed, is that it encourages people to write code that’s more concise and more intelligible (although you can create a hot mess in any language).
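
For comparison, here’s a rough sketch (mine, written against a made-up editor API, not VS Code’s real one) of what the heart of a word-count feature can look like in Python:

# Rough sketch of a word counter in Python. The "editor" object and its
# attributes are hypothetical, not a real API.
def update_word_count(editor):
    text = editor.current_document.text
    word_count = len(text.split())
    editor.status_bar.show(f"Words: {word_count}")

Same basic job, a lot less ceremony, and not a Disposable in sight.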

An experiment in a lab can tell you that in isolation, one feature/strategy is more effective than another. But that’s not all that useful. What you really need is data that tells you what’s most effective when you’re dealing with all the complexities and interdependencies of code out in the world.

And that brings me back to the programming language Quorum, whose website hosts that synopsis of evidence-based programming design. If it’s the most “evidence-oriented” programming language around, it must be pretty hot stuff, right? But from what I can tell from googling and from checking out the rest of their website, nobody seems to be using it to solve real-world problems.

Don’t get me wrong. A lot of great work was put into it, and some of the tutorials, including how to create a Frogger-like game, are fun and show a lot of thought (although it requires way too many lines of code). But as far as I can tell, the only people who are seriously into Quorum are high school teachers.

If your only goal is to make it easier to teach kids the basic principles of programming, that’s fine. But if you read most of the articles about evidence-based programming design, they’re hunting bigger game. There’s definitely an undertone to a fair number of evidence-based articles that most people who design coding languages don’t have their act together. What’s wrong with these jokers that they aren’t bothering to look at all this great research?

To which I’d reply: if Quorum is so awesome, why isn’t anyone using it for real work? Most of the best coders I’ve known love learning new programming languages. Many believe if you want to be at the top of your game you should learn a new language every couple of years, if only so you can pick up on some of the latest design concepts that went into them. If anything, these folks can go overboard, forever searching for the Holy Grail of coding languages. If Quorum was all that, at least some of those really skilled folks would be using it and evangelizing for it. But the data is pretty clear: for all its prowess in the lab, Quorum doesn’t seem to have gained traction in the real world.

I’m not saying we shouldn’t pay attention to research on programming design, some of which is quite thought-provoking. But after my first pass through the literature, it feels like they need to seriously up their game. They need, in short, to start learning from the “wild rats” of coding.

What Agricultural Extension Services’ History Can Teach Us about Democratizing Technology

Could millions of American workers take full advantage of the emerging tech that’s going to dominate our economy in the next 20 years? That might sound crazy to you. But this isn’t the first time we’ve had to figure out how to make tech accessible to a daunting number of people. US agriculture went through an extraordinary transformation from the mid-19th to the early 20th century, producing one of the biggest explosions in productivity the world had ever seen. To make it happen, farmers throughout the US had to adopt and master one wave of technological & process change after another. And that couldn’t have happened without several institutions, culminating in Agricultural Extension Services. To bring about the same kind of radical transformation today, we can learn a thing or two from their experience.

Agricultural Extension Services were the third wave of institutions intended to bring the agricultural revolution of automation and mechanization to millions of American farmers. The 1862 Morrill Act allowed states to use the funds from selling public lands to establish “land-grant colleges” to teach agricultural and mechanical arts. Although a good start, not enough farmers could afford to take the time to go to college. So in 1887, the Hatch Act gave states the ability to create agricultural experiment stations to conduct agricultural research and share that knowledge with students and farmers. But the knowledge still wasn’t reaching enough people. So in 1914 the Smith-Lever Act helped land-grant colleges and the USDA cooperate to make sure the knowledge/tech coming out of the experiment stations got into the hands of farmers.

There are 3 lessons we can learn from the successes of Agricultural Extension Services:

1) Scale
Agricultural Extension Services operated on a scale that’s hard to fathom today. Extension services placed at least one staff member in virtually every county in the US. That meant they could reach farmers where they were, year-round. They often helped their communities create clubs and a whole range of social groups, gatherings, etc. that provided multiple opportunities for men, women, and children to learn about the latest agricultural tech in a safe environment, surrounded by their friends. If you had questions, it was easy to get answers — and good extension services staff made sure to build relationships throughout the community so asking questions wasn’t intimidating. Given that robotics, AI, augmented and virtual reality, digital fabrication, etc. are going to become as central to our economy as agriculture once was, there’s no reason we couldn’t provide support on a similar scale.

2) Bottom up Feedback Loops
Extension Services staff spent a lot of time thinking about how to break down the latest research in the fields of biology, chemistry, soil science, etc. so it was accessible to farmers who usually had at best a high school education. But they also served as a feedback loop. Agricultural experiment stations wouldn’t do much good if they didn’t understand the problems farmers were currently facing. Extension services staff played a critical role in getting that information and understanding back to research stations and colleges, helping ensure it would shape the next round of research.

3) Fostering Civic Engagement
Finally, many Extension Service agents used their work to push for an approach that went beyond just training farmers in technical skills. They understood that for farming to thrive, conversations couldn’t stop at the right technique for planting seeds; they needed to include civic problems such as soil erosion and the structure of commodity prices.

Take the example of Louisiana State University’s Mary Mims, who I discussed in a previous post. Mims, who had a national reputation as one of the best speakers of her era, advocated for a vision of what she called the “community organizing method,” which scholar and organizer Harry C. Boyte argues was grounded in building community power.

Mims, like others in cooperative extension (home economics, 4-H and other areas) drew on the Jane Addams Hull House tradition. She was also inspired by folk schools in Denmark. These had a focus on agency, building the civic power of students, families, and larger communities. They were “schools for life,” grounded in the experiences and life of common people not elites, with parallels to the “New School” (Escuela Nueva) movement in Latin America, begun in Colombia, which we’ve discussed before….

In Mims’s view, professionals of any kind should be a “leaven” for community self-organization. “So-called ‘social workers’ cannot hammer a community into shape,” she argued in her book, The Awakening Community. “If a community grows, it must do so from the inside.”

And Mims wasn’t alone. Boyte notes that

In the US, the United States Department of Agriculture and land grant colleges from 1937 to 1942 involved more than three million people in rural America in community discussions about the future of rural life, taking up issues that ranged from commodity prices and soil erosion to the future of democracy in America.

Agricultural Extension Services had plenty of problems, and there’s now extensive research on the ways in which they often helped reproduce racial, gender, and other inequalities. But there are still a lot of valuable lessons to be learned. If we could bring about this kind of remarkable, far-sighted change in late 19th- and early 20th-century agriculture, there’s no reason we can’t do the same for robots and other emerging technology today.

The Digital Music Economy Is Still A Hot Mess

Music reporter Cherie Hu was recently invited to attend the annual Roundtable Conference, an international “music industry symposium focused on copyright.” What she learned is that the industry is basically stuck.

After the shock and awe of the Napster era — in no small part because the most powerful players in the music industry either had their heads in the sand or were treating their customers like the enemy — by the 2010s, as streaming took off, Roundtable participants seemed pretty upbeat and hopeful about developing sustainable solutions. But by this year, they “[seemed] to revert back towards the pessimism and stagnation of a decade prior.” Why?

A music industry consultant at the Roundtable suggested that streaming’s initial, rosy promise to rights holders has since been overshadowed by disillusionment around messy data and the compensation challenges that result. “We’re still only at the beginning of the innovation bell curve,” she said. “It took until 2015 — 15 years! — for labels to admit that on-demand streaming was here to stay. Then came 2016, which was an ‘oh shit’ moment when labels realized that they weren’t prepared to process and analyze the vast swaths of data that were coming in, nor were they equipped to make sure the money was flowing properly through the pipes.”

And so the industry began to veer back towards a Game of Thrones approach to their common problem.

As a result, a behemoth, fragmented rights management landscape has cemented itself even further into the industry’s core…. Interestingly, the majority of Roundtable participants agreed that a global rights database — the long-glorified concept of a single, authoritative registry that would reduce ambiguity around ownership and revenue splits across organizations — was no longer a feasible solution for the music business, both technologically and politically. Many participants argued not only that non-disclosure agreements essentially force labels and publishers to resist creating a public database, but also that unattributable royalties (commonly referred to as “black-box money”) gets distributed on market share if unclaimed after a few years, which benefits bigger labels and actually gives them incentives to provide bad data.

In fact, there has been a marked attitude change in the industry overall around data quality, such that umbrella organizations are beginning to compete, rather than cooperate, on accuracy and trust. The RIAA and NMPA have banded against the ASCAP-BMI duo on building their own separate databases, presumably more for strategic advantage than for the betterment of the wider music landscape….

Many Roundtable participants also pointed to how the pessimism in the music industry comes not just from the lack of good data and surrounding incentives, but also around the culpability of streaming services in devaluing music and making the consumption experience less personal, particularly for DIY and niche communities. An executive from a European PRO, calling himself a “student of history,” recalled that the music industry’s solution for digging itself out of its historic recession in the 1920s was to cater to niche genres. “Now, I’m afraid we’ve found ourselves at the tail end of a similar financial situation, but have the opposite solution of going towards safer, mass-market music, which has the effect of making the average music listener more indifferent,” he said. “We’ve gone from curating music that people really love to curating music that people simply don’t hate.”
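
That “black-box money” point is worth pausing on, because it explains why bad data can be a feature, not a bug. Here’s a toy illustration, with numbers I made up:

# Toy illustration (made-up numbers): unattributable "black-box" royalties
# get split by market share after a few years, so the biggest labels collect
# the most from money that bad data kept anyone from claiming.
unclaimed_pool = 1_000_000  # dollars of royalties nobody could attribute

market_share = {"Big Label A": 0.40, "Big Label B": 0.35, "Indie Label C": 0.02}

for label, share in market_share.items():
    print(f"{label}: ${unclaimed_pool * share:,.0f}")

# Big Label A: $400,000
# Big Label B: $350,000
# Indie Label C: $20,000

The messier the data, the bigger the unclaimed pool, and the pool flows to whoever already has the biggest share.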

It’s not that there aren’t any attempts to create a healthier solution. For example, Hu mentions the Open Music Initiative, “a historic, top-level initiative bringing together over 140 member organizations, from major labels and publishers to early-stage startups, dedicated to rights management reform.” But efforts like these were not discussed in the Roundtable conversations, and most participants seemed unaware of them. All in all, a pretty sad state of affairs.