Friday, September 14, 2012

Hip Hop Roots & Bebo Valdes


Numbers About My Mother

It’s August and it’s San Francisco so it’s cold. While I’m walking home from work there’s a call from a Portland number I don’t recognize. I answer. It’s a friend of my mother’s, phoning to let me know that my mother has tried to kill herself, that she’s at a hospital in an induced coma. I slump onto a cement car stop in a parking lot and listen to the details, dig in my purse for a pen, turn the phone away from the wind, write down the hospital’s name and the room number, watch people walk down Polk Street on their way home or to happy hour, thinking how normal they all look, how careless they act while my mother is in a coma. Her friend says she’s not sure how bad it is. I try to figure out how to phrase my question correctly, politely: “You mean she might die?” but I can’t think of how it’s supposed to be said, how a person asks this of a near-stranger regarding her own mother, so I don’t ask it.

My mother is 57 and I am 32. This isn’t the first time she has tried to kill herself. The other time was when she was 32 and I was seven. Back then, she was a single mother of four kids — my three brothers and me. She’d been married twice, divorced twice. We lived in a little house that my brothers and I came into and went out of with impunity while she slept days, worked at a bar nights. The house had two bedrooms and one attic. One of my brothers was still a baby, not yet two years old. That’s a lot of numbers for one paragraph. Here are some more:

Number of brothers I didn’t get to see anymore when my mother gave up custody of all of us and we went into foster care: two.

Number of families, total, my brother and I lived with before graduating high school: seven.

Number of years old I was when I re-met my mother: nineteen.

Average number of times my mother and I talk on the phone per week: three. We’re close, like best friends sometimes. We talk about everything, almost. But then. We’ll never be close enough. We don’t talk about the difficult things. We don’t talk about the days when we were a family of five. I don’t ask her what number of times she had to put her signature on what number of lines, what number of forms she had to fill out to let go of all four of her children. One? Five? Twelve? How does that work?

The mother I know now is a very small, mellow person who wears feather earrings and three or four rings on each hand and gauzy scarves and a denim jacket with a big peace sign on the back, and sometimes when we talk she seems very old and wise, and sometimes she seems very young and simple. Her cell phone ring tone is “All You Need Is Love.”

I don’t call her Mom. I don’t remember what it felt like to call her that. I write it in cards, but when we’re together I can’t think of a comfortable way to address her, so I don’t call her anything.

Although. She is a lot of things. I look just like her, and sometimes when I’m leaving a friend a voicemail or giving a stranger directions on the street, I have to stop, startled for a second, because I’m intonating my words in the exact same way she does.

All these years later, and now she’s tried it again. It comes as a shock, because I hadn’t thought ... I don’t know what I hadn’t thought. I try to pinpoint it. I’m still sitting on the car stop in the parking lot, and it seems important to decide, before I get up and continue my walk home and call the hospital, why exactly my mother doing this has come as a shock. I come up with: I guess I just thought she was happy. Well, not in an ecstatic-to-be-living-in-the-world sort of way, but in a regular way — she crochets barefoot sandals, she has a garden — that just-enough sort of happy that prevents people from wanting to die. That’s the kind of happiness I had been envisioning in my mother’s life, I guess. Tomato-and-corn-garden happy.

by Melissa Chandler, The Hairpin |  Read more:

Collective Soul Cat


Please Stare

[ed. Wow, Sasha Weiss. What a great piece of writing.]

Entering the big tent at Lincoln Center, where most of the marquee shows at New York City’s fashion week take place, you feel transported to the scene in the “Hunger Games” movie, where the Capitol’s élite gather to observe the presentation of the new tributes, dressed in their metallic and feathered finest. The figures at Lincoln Center are humans, but humans who have imagined themselves into some mirrored universe, where women walk on craggy stilts, lips are colored the darkest crimson, and nearly seven-foot-tall men in hot pants show off their legs. They drift around the lobby, eyeing one another (eyes, too, are dramatically painted here, often in gold). Occasionally, homage is paid to a particularly daring outfit by means of a photograph.

I thought of the “Hunger Games” because that scene (the most visually arresting one in the film) is designed like a satanic fashion show, the runway serving as a conveyor belt for young children compelled to enact the desires of the powerful for beauty and bloodshed. “Hunger Games” isn’t the only pop-cultural artifact that primes us to view fashion as an expression of some inner rot, as vanity, a grasp at wealth, the shallow aspirations of a classist society. Even the shows devoted to its practice, like “Project Runway” and “America’s Next Top Model,” make fashion into a ruthless competition, presided over by stern, frosty judges (mostly women).

I didn’t expect to love the shows as I did, but I found them surprisingly joyous affairs. Watching them, we’re given permission to project ourselves into idealized, adventuresome future lives—ones that involve shimmering, jewel-toned gowns, stiff metallic trench coats, and flowing pants suits screen printed with images of highways—but we’re pulled even more forcefully to imagine our pasts. The fashion show—which begins with all the calculation and jostling of regular life—ends up depositing us somewhere back in the realm of childhood: before our personalities had coalesced, when we encountered ourselves in mirrors, wondered about who we might become, and pretended.

* * *

In New York, people stare at one another all the time, but it’s usually surreptitious: a flickering once-over walking down Spring Street, checking out someone’s jeans. At fashion week, looking is the point. The waiting to enter the shows, and then the waiting for them to start, is interminable, and seems designed to stoke the study of others. Massed in a pack that reluctantly forms a line, the fashionistas gather at the entrance to a show, gobbling each other up with their eyes. (I spy, on the way into Nanette Lepore: many Louis Vuitton totes; a hideous crocheted poncho in garish colors layered over a flared leather miniskirt; a man and a woman who look to be in their eighties, both immaculately attired, the woman in black Chanel with leopard shoes.)

When we’re finally allowed to go in, the room itself is like a giant, blinking eye. At the back wall, hundreds of photographers have arranged themselves, nearly on top of one another, on rafters, creating a wall of jutting cameras. Lining the room are rows of benches, and the spectators pile in (buyers, journalists, models, and the pure lovers of fashion, whom one can spot because of the inventiveness of their outfits. I watched one latecomer navigate through a thicket of legs to reach her seat in four-inch platformed Oxfords, wearing stripes in all directions, to match her hair, which had one streak of white). We’re seated in descending order of importance—the well-known writers, editors, and models in the front row, closest to the catwalk. I’m in the standing-room section, the better to survey the room.

A strobe-like flashing somewhere down below indicates the presence of a celebrity surrounded by cameras. I can see the fit silhouette of a woman in a haze of light. Someone near me murmurs that it’s Edie Falco. Even a crowd of that size (five hundred, easily, in the big tent’s main space) quickly becomes a hive, its lines of hierarchy drawn in thick black. The lights are low, with a glow of illumination from the stage and a static of voices, and then, as in the theatre, the room turns a shade darker, the talking subsides. There’s a beat of anticipation, and the bright lights snap on.

Hundreds of well-dressed, strategizing people who have spent the last thirty minutes comparing themselves to one another incline their heads and their attention toward the runway. In this moment, they all want the same thing: to watch the beautiful parade.

by Sasha Weiss, New Yorker |  Read more:
Photo: Maria Lokke

Three Reasons to Salute Ben Bernanke

It’s time to give Ben Bernanke some credit. Under attack from the left and right for much of the past year, the mild-mannered former Princeton prof has shown some leadership and pushed through a major policy shift. In committing the Fed to buying tens of billions of dollars’ worth of mortgage bonds every month until the jobless rate, currently 8.1 per cent, falls markedly, Bernanke and his colleagues on the Fed’s policy-making committee have finally demonstrated that they won’t stand aside as tens of millions of Americans suffer the harsh consequences of a recession that was largely made on Wall Street.

I’ve had my ups and downs with Bernanke, whom I profiled at length back in 2008. At the start of the year, I thought critics were giving him a raw deal. With short-term interest rates close to zero (where they’ve been since December, 2008), and with job growth seemingly picking up, the calls for more Fed action seemed overstated. But over the past six months, as the recovery sputtered and Bernanke dithered, I, too, ran out of patience with him. In a column in Fortune last month, I even suggested that Barack Obama should have replaced him when he had the chance, back in 2010.

It turns out that Bernanke was merely biding his time. I still think the Fed should have moved earlier. Once it became clear that slower G.D.P. growth, rather than some statistical aberration, was generating the big falloff in job creation we saw from March onwards, there was no justification for inaction. But Bernanke has now rectified the error—and then some. For at least three reasons, Thursday’s move was a historic one, which merits a loud salute:

1. Bernanke exceeded expectations. For several months now, he has been saying that the Fed would eventually act if the labor market didn’t improve of its own accord. In Jackson Hole last month, at the Fed’s annual policy gathering, he strongly hinted at another round of quantitative easing—the practice of exploiting the Fed’s capacity to create money and making large-scale purchases of bonds, which puts downward pressure on interest rates, which, in turn, spurs spending and job creation—at least in theory.

The Fed has tried this policy twice before, in 2009/10 (QE1) and 2010/11 (QE2). In retrospect, it was a big mistake to abandon QE2 just as the Obama Administration’s fiscal stimulus, which had provided support to the economy from 2009 to 2011, was running down. The experience of Japan demonstrates that in the aftermath of asset-price busts, when households and firms are seeking to pay down their debts, the prolonged maintenance of monetary and fiscal stimulus is necessary to prevent a semi-permanent slump.

Bernanke didn’t publicly concede on Thursday that he had blundered—that would be asking too much. But in announcing the terms of QE3, he went considerably further than most observers had been expecting. The two previous rounds of quantitative easing were term-limited: this one isn’t. Rather, its duration will be directly linked to the jobs picture. “(I)f the outlook for the labor market does not improve substantially, the Committee will continue its purchases of agency mortgage-backed securities, undertake additional asset purchases, and employ its other policy tools as appropriate until such improvement is achieved…” the Fed said in a statement.

by John Cassidy, New Yorker |  Read more:
Photo: Platon

My Life as a TaskRabbit

Standing in the living room of his luxurious two-bedroom apartment, which has sweeping views of the San Francisco Bay, Curtis Jackson informs me that I am a terrible housecleaner. There are soap stains on the walls of his master bathroom and pools of water gathering near the edges of the tub. My Roomba vacuum, we discover after a lengthy and humiliating search, is out of power and stuck under a bed. There’s an entire room that I didn’t know about and thus never cleaned. I also neglected to take out the trash and left the living room coated in the noxious perfume of an organic cedar disinfectant. “I respect what you are trying to do, and you did an OK job in the time allotted,” he says. “But frankly, stick to being a reporter.”

The apartment is one stop in the middle of my short, backbreaking, soul-draining journey into what Silicon Valley venture capitalists often call the distributed workforce. This is the fancy term for the marketplace for odd jobs hosted by the site TaskRabbit, the get-me-a-soy-latte errands offered by the courier service Postmates, and the car washing assignments aggregated by yet another venture, called Cherry. These companies and several others are in the business of organizing and auctioning tedious and time-consuming chores. Rob Coneybeer, managing director of the investment firm Shasta Ventures, which has backed several of these new companies, says the goal is to build a new kind of labor market “where people end up getting paid more per hour than they would have otherwise and find it easier to do jobs they are good at.”

The idea of posting or finding jobs online isn’t new. Craigslist, the pioneering Internet bulletin board, allowed the primitive, gentle folk of the 1990s to find day work, not to mention cheap dates. These new services are different, partly because they’re focused and carefully supervised, and partly because they take advantage of smartphones. Workers can load one of these companies’ apps on their location-aware iPhone or Android device and, if the impulse strikes, take a job near them any time of day. Employers can monitor the whereabouts of their workers, make payments on their phones or over the Web, and evaluate each job after it’s accomplished. The most capable workers then rise to the top of the heap, attracting more work and higher pay. Lollygaggers who don’t know how to recharge their Roombas fall to the bottom of the barrel.

Distributed workforce entrepreneurs and their investors are thinking big. They compare their startups to fast-growing companies such as Airbnb, which allows people to rent out their homes. In this case, the assets for rent are people’s skills and time. Leah Busque, a former IBM software engineer who started and runs TaskRabbit, says thousands of people make a living (up to $60,000 a year) on her site, which operates in San Francisco, Los Angeles, New York, Chicago, and five other cities. “We are enabling micro-entrepreneurs to build their own business on top of TaskRabbit, to set their own schedules, specify how much they want to get paid, say what they are good at, and then incorporate the work into their lifestyle,” she says.

Venture capitalists have bet $38 million on TaskRabbit and millions more on similar startups. Other distributed labor companies, with names like IAmExec (be a part-time gopher) and Gigwalk (run errands for companies), are being founded every day. Listening to this entrepreneurial buzz all summer, I got a notion that I couldn’t shake—that the only way to take the temperature of this hot new labor pool was to jump into it.

by Brad Stone, Bloomberg Businessweek |  Read more:

What Was Really Behind the Benghazi Attack?

Were the attacks on the United States Consulate in Benghazi, which killed the American Ambassador and three other diplomats, really provoked by the film that the assailants, and many news networks, claim was their motive? Was it really religious outrage that made a few young men lose their heads and commit murder? Have any of the men who attacked the consulate actually seen the film? I do not know one Libyan who has, despite being in close contact with friends and relatives in Benghazi. And the attack was not preceded by vocal outrage toward the film. Libyan Internet sites and Facebook pages were not suddenly busy with chatter about it.

The film is offensive. It appears that it was made, rather clumsily, with the deliberate intention to offend. And if what happened yesterday was not, as I suspect, motivated by popular outrage, that outrage has now, as it were, caught up with the event. So, some might say, the fact that the attack might have been motivated by different intentions than those stated no longer matters. I don’t think so. It is important to see the incident for what it most likely was.

No specific group claimed responsibility for the attack, which was well orchestrated and involved heavy weapons. It is thought to be the work of the same ultra-religious groups who have perpetrated similar assaults in Benghazi. They are religious, authoritarian groups who justify their actions through very selective, corrupt, and ultimately self-serving interpretations of Islam. Under Qaddafi, they kept quiet. In the early days of the revolution some of them claimed that fighting Qaddafi was un-Islamic and conveniently issued a fatwa demanding full obedience to the ruler. This is Libya’s extreme right. And, while much is still uncertain, Tuesday’s attack appears to have been their attempt to escalate a strategy they have employed ever since the Libyan revolution overthrew Colonel Qaddafi’s dictatorship. They see in these days, in which the new Libya and its young institutions are still fragile, an opportunity to grab power. They want to exploit the impatient resentments of young people in particular in order to disrupt progress and the development of democratic institutions.

Even though they appear to be well funded from abroad and capable of ruthless acts of violence against Libyans and foreigners, these groups have so far failed to gain widespread support. In fact, the opposite: their actions have alienated most Libyans.

Ambassador J. Christopher Stevens was a popular figure in Libya, and nowhere more than in Benghazi. Friends and relatives there tell me that the city is mournful. There have been spontaneous demonstrations denouncing the attack. Popular Libyan Web sites are full of condemnations of those who carried out the assault. And there was a general air of despondency in the city Wednesday night. The streets were not as crowded and bustling as usual. There is a deep and palpable sense that Benghazi, the proud birthplace of the revolution, has failed to protect a highly regarded guest. There is outrage that Tripoli has yet to send government officials to Benghazi to condemn the attacks, instigate the necessary investigations, and visit the Libyan members of the consulate staff who were wounded in the attack. There is anger, too, toward the government’s failure to protect hospitals, courtrooms, and other embassies that have recently suffered similar attacks in Benghazi. The city seems to have been left at the mercy of fanatics. And many fear that it will now become isolated. In fact, several American and European delegates and N.G.O. personnel have cancelled trips they had planned to make to Benghazi.

by Hisham Matar, New Yorker |  Read more:
Photograph by Ibrahim Alaguri/AP Photo

The Machines Are Taking Over

[ed. How computerized tutors are learning to teach humans.]

In a 1984 paper that is regarded as a classic of educational psychology, Benjamin Bloom, a professor at the University of Chicago, showed that being tutored is the most effective way to learn, vastly superior to being taught in a classroom. The experiments headed by Bloom randomly assigned fourth-, fifth- and eighth-grade students to classes of about 30 pupils per teacher, or to one-on-one tutoring. Children tutored individually performed two standard deviations better than children who received conventional classroom instruction — a huge difference. (...)

The morning after I watched Tyler Rogers do his homework, I sat in on his math class at Grafton Middle School. As he and his classmates filed into the classroom, I talked with his teacher, Kim Thienpont, who has taught middle school for 10 years. “As teachers, we get all this training in ‘differentiated instruction’ — adapting our teaching to the needs of each student,” she said. “But in a class of 20 students, with a certain amount of material we have to cover each day, how am I really going to do that?”

ASSISTments, Thienpont told me, made this possible, echoing what I heard from another area math teacher, Barbara Delaney, the day before. Delaney teaches sixth-grade math in nearby Bellingham. Each time her students use the computerized tutor to do their homework, the program collects data on how well they’re doing: which problems they got wrong, how many times they used the hint button. The information is automatically collated into a report, which is available to Delaney on her own computer before the next morning’s class. (Reports on individual students can be accessed by their parents.) “With ASSISTments, I know none of my students are falling through the cracks,” Delaney told me.

After completing a few warm-up problems on their school’s iPod Touches, the students turned to the front of the room, where Thienpont projected a spreadsheet of the previous night’s homework. Like stock traders going over the day’s returns, the students scanned the data, comparing their own grades with the class average and picking out the problems that gave their classmates trouble. (“If you got a question wrong, but a lot of other people got it wrong, too, you don’t feel so bad,” Tyler explained.)

Thienpont began by going over “common wrong answers” — incorrect solutions that many students arrived at by following predictable but mistaken lines of reasoning. Or perhaps not so predictable. “Sometimes I’m flabbergasted by the thing all the students get wrong,” Thienpont said. “It’s often a mistake I never would have expected.” Human teachers and tutors are susceptible to what cognitive scientists call the “expert blind spot” — once we’ve mastered a body of knowledge, it’s hard to imagine what novices don’t know — but computers have no such mental block. Highlighting “common wrong answers” allows Thienpont to address shared misconceptions without putting any one student on the spot.

I saw another unexpected effect of computerized tutoring in Delaney’s Bellingham classroom. After explaining how to solve a problem that many got wrong on the previous night’s homework, Delaney asked her students to come up with a hint for the next year’s class. Students called out suggested clues, and after a few tries, they arrived at a concise tip. “Congratulations!” she said. “You’ve just helped next year’s sixth graders learn math.” When Delaney’s future pupils press the hint button in ASSISTments, the former students’ advice will appear.

Unlike the proprietary software sold by Carnegie Learning, or by education-technology giants like Pearson, ASSISTments was designed to be modified by teachers and students, in a process Heffernan likens to the crowd-sourcing that created Wikipedia. His latest inspiration is to add a button to each page of ASSISTments that will allow students to access a Web page where they can get more information about, say, a relevant math concept. Heffernan and his W.P.I. colleagues are now developing a system of vetting and ranking the thousands of math-related sites on the Internet.

by Annie Murphy Paul, NY Times |  Read more:
Illustration by Tim Enthoven

Thursday, September 13, 2012

Healthcare's "Massive Transfer of Wealth"

[ed. It really is a massive transfer of wealth to health insurers and health care providers. The end years are expensive - no matter how much you think you've saved to sustain some measure of financial security, you never know if it will be enough. Then, there's the added indignity of having essentially zero control over when, or how, you exit this life. There has to be a better way.]

Here are excerpts from one family's story about the financial aspects of end-of-life-related healthcare:
My aunt, aged 94, died last week. In and of itself, there is nothing remarkable in this statement, except for the fact that she died a pauper and on medical assistance as a ward of the state of Minnesota... 
My aunt and her husband, who died in 1985, were hardworking Americans. The children of Polish immigrants, they tried to live by the rules. Combined, they worked for a total of 80 years in a variety of low-level, white-collar jobs. If they collectively earned $30,000 in any given year, that would have been a lot. 
Yet, somehow, my aunt managed to save more than $250,000. She also received small pensions from the Teamsters Union and the state of California, along with Social Security and a tiny private annuity. In the last decade of her life, her monthly income amounted to about $1,500... 
But when she fell ill and had to be placed in assisted living, and finally in a nursing home, her financial fate was sealed. Although she had Medicare and Medicare supplemental insurance, neither of these covered the costs of long-term care. Her savings were now at risk, at a rate of $60,000 a year... 
In the end, she spent everything she had to qualify for Medicaid in Minnesota, which she was on for the last year of her life. This diligent, responsible American woman was pauperized simply because she had the indecency to get terminally ill... 
Though I have not been able to find statistics on the subject, I am certain that there will be a massive transfer of wealth over the next two or three decades, amounting to hundreds of billions of dollars or more, from people just like my aunt to health insurers and health care providers... 
This week, I was about to close out her checking account in the amount of $215, the sum total of her wealth. But I received, in the mail, a bill from a health care provider in the amount of $220. Neither Medicare nor her supplemental insurer will pay it, because it is an unspecified “service not covered.”

More details of the story at the StarTribune. Of course, it’s just one family’s story. Repeated hundreds of thousands of times across the country.

My own mother, age 94, has asked me, “when the time comes,” to “put her down.”

by Minnesotastan, TYWKIWDBI |  Read more:

© Chris Ware/The New Yorker

Tyranny of Merit


The ideal of meritocracy has deep roots in this country. Jefferson dreamed of a “natural aristocracy.” But the modern meritocracy dates only to the 1930s, when Harvard President James Bryant Conant directed his admissions staff to find a measure of ability to supplement the old boys’ network. They settled on the exam we know as the SAT.

In the decades following World War II, standardized testing replaced the gentleman’s agreements that had governed the Ivy League. First Harvard, then Yale and the rest filled with the sons and eventually daughters of Jews, blue-collar workers, and other groups whose numbers had previously been limited.

After graduation, these newly pedigreed men and women flocked to New York and Washington. There, they took jobs once filled by products of New England boarding schools. One example is Lloyd Blankfein, the Bronx-born son of a Jewish postal clerk, who followed Harvard College and Harvard Law School with a job at a white-shoe law firm, which he left to join Goldman Sachs.

Hayes applauds the replacement of the WASP ascendancy with a more diverse cohort. The core of his book, however, argues that the principle on which they rose inevitably undermines itself.

The argument begins with the observation that meritocracy does not oppose unequal social and economic outcomes. Rather, it tries to justify inequality by offering greater rewards to the talented and hardworking.

The problem is that the effort presumes that everyone has the same chance to compete under the same rules. That may be true at the outset. But equality of opportunity tends to be subverted by the inequality of outcome that meritocracy legitimizes. In short, according to Hayes, “those who are able to climb up the ladder will find ways to pull it up after them, or to selectively lower it down to allow their friends, allies and kin to scramble up. In other words: ‘whoever says meritocracy says oligarchy.’”

With a nod to the early 20th-century German sociologist Robert Michels, Hayes calls this paradox the “Iron Law of Meritocracy.” (...)

Hayes oversells his argument as a unified explanation of the “fail decade.” Although it elucidates some aspects of the Iraq War, Katrina debacle, and financial crisis, these disasters had other causes. Nevertheless, the Iron Law of Meritocracy shows why our elites take the form they do and how they fell so out of touch with reality. In Hayes’s account, the modern elite is caught in a feedback loop that makes it less and less open and more and more isolated from the rest of the country.

What’s to be done? One answer is to rescue meritocracy by providing the poor and middle class with the resources to compete. A popular strategy focuses on education reform. If schools were better, the argument goes, poor kids could compete on an equal footing for entry into the elite. The attempt to rescue meritocracy by fixing education has become a bipartisan consensus, reflected in Bush’s “No Child Left Behind” and Obama’s “Race to the Top.”

Hayes rejects this option. The defect of meritocracy, in his view, is not the inequality of opportunity that it conceals, but the inequality of outcome that it celebrates. In other words, the problem is not that the son of a postal clerk has less chance to become a Wall Street titan than he used to. It’s that the rewards of a career on Wall Street have become so disproportionate to the rewards of the traditional professions, let alone those available to a humble civil servant.

by Samuel Goldman, The American Conservative |  Read more:
Illustration by Michael Hogue

How Do Our Brains Process Music?

I listen to music only at very specific times. When I go out to hear it live, most obviously. When I’m cooking or doing the dishes I put on music, and sometimes other people are present. When I’m jogging or cycling to and from work down New York’s West Side Highway bike path, or if I’m in a rented car on the rare occasions I have to drive somewhere, I listen alone. And when I’m writing and recording music, I listen to what I’m working on. But that’s it.

I find music somewhat intrusive in restaurants or bars. Maybe due to my involvement with it, I feel I have to either listen intently or tune it out. Mostly I tune it out; I often don’t even notice if a Talking Heads song is playing in most public places. Sadly, most music then becomes (for me) an annoying sonic layer that just adds to the background noise.

As music becomes less of a thing—a cylinder, a cassette, a disc—and more ephemeral, perhaps we will start to assign an increasing value to live performances again. After years of hoarding LPs and CDs, I have to admit I’m now getting rid of them. I occasionally pop a CD into a player, but I’ve pretty much completely converted to listening to MP3s either on my computer or, gulp, my phone! For me, music is becoming dematerialized, a state that is more truthful to its nature, I suspect. Technology has brought us full circle.

I go to at least one live performance a week, sometimes with friends, sometimes alone. There are other people there. Often there is beer, too. After more than a hundred years of technological innovation, the digitization of music has inadvertently had the effect of emphasizing its social function. Not only do we still give friends copies of music that excites us, but increasingly we have come to value the social aspect of a live performance more than we used to. Music technology in some ways appears to have been on a trajectory in which the end result is that it will destroy and devalue itself. It will succeed completely when it self-destructs. The technology is useful and convenient, but it has, in the end, reduced its own value and increased the value of the things it has never been able to capture or reproduce.

Technology has altered the way music sounds, how it’s composed and how we experience it. It has also flooded the world with music. The world is awash with (mostly) recorded sounds. We used to have to pay for music or make it ourselves; playing, hearing and experiencing it was exceptional, a rare and special experience. Now hearing it is ubiquitous, and silence is the rarity that we pay for and savor.

Does our enjoyment of music—our ability to find a sequence of sounds emotionally affecting—have some neurological basis? From an evolutionary standpoint, does enjoying music provide any advantage? Is music of any truly practical use, or is it simply baggage that got carried along as we evolved other more obviously useful adaptations? Paleontologist Stephen Jay Gould and biologist Richard Lewontin wrote a paper in 1979 claiming that some of our skills and abilities might be like spandrels—the architectural negative spaces above the curve of the arches of buildings—details that weren’t originally designed as autonomous entities, but that came into being as a result of other, more practical elements around them.

by David Byrne, Smithsonian | Read more:
Photo: Clayton Cubitt

Melody Gardot



Virginia Colback, “Yellow and Grey Abstract”, oil and cement on canvas