Saturday, March 16, 2013

Bar Examined

Steven J. Harper has been blessed with notably good timing.

Born smack in the middle of the Baby Boom, the Minneapolis native excelled in school, collected bachelor’s and master’s degrees in economics from Northwestern University, then headed straight to Harvard Law School, where he was a classmate of future U.S. Supreme Court Chief Justice John Roberts.
Neither of Harper’s parents—a truck driver dad and a stay-at-home mom—got past high school. A working-class kid, Harper financed his future with student loans. By the time he graduated Harvard Law, magna cum laude, in 1979, the total debt he had incurred for his three degrees from Northwestern and Harvard came to about $16,000.

It paid off. Harper went straight from Harvard Law to Kirkland & Ellis in Chicago, where he had been a summer associate. His starting salary was $25,000. He flourished, and was mentored and trained as a litigator by sage elder partners who invested their time and energy in the new blood brought into the old-even-then firm. This was the start of the Reagan era, and the U.S. economy began to boom after a long malaise. The future for the bright young corporate litigator was sweet.

“I led what anyone would call a charmed life in the law,” Harper writes in his new book, The Lawyer Bubble: A Profession in Crisis. “Then, as now, most people assumed that the legal profession offered financial security and a way to climb out of the lower or middle class. Career satisfaction, upward mobility, social status, financial security—who could ask for more?”

Harper spent his entire career at Kirkland & Ellis, made equity partner by the time he was thirty-four, and did so well financially that he was able to retire from the practice of law in 2010, when he was fifty-three years old. He now writes books. His latest seeks to warn bright young sons and daughters of midwestern truck drivers that they had best not try to climb the ladder that served him so well.

Some of the rungs are broken, others greased and impossibly slippery, and that ladder doesn’t stretch to any place you would want to be, really. The era of law being the safe and well-compensated “traditional default option for students with no idea what to do with their lives,” Harper notes, is over, even if the tens of thousands who still flood into law school each year stubbornly believe that these macro forces will somehow not apply to them.

by Elizabeth Lesly Stevens, Washington Monthly | Read more:
Image: uncredited

De Nimes: The Long Journey of Blue Jeans

Before we had low-rise, straight-leg, skinny, selvage, stretchy, resin-coated, lotion-infused, or mom jeans, there was simply jean—the fabric. The name likely originated from Gênes, the French name for Genoa, Italy, where sailors wore a twill blend of cotton, linen, and wool that came in a variety of stripes and colors.

Today’s jeans are made from heavier, all-cotton denim woven in a combination of indigo-dyed vertical yarn and natural horizontal yarn, resulting in the fabric’s white-speckled surface and pale underside. And although the original name for denim came from Nîmes, France—as in, de Nîmes—the fabric was most likely first produced in England.

Once the United States emancipated itself from British rule, the former colonists stopped importing European denim and began producing it themselves from all-American cotton, picked by slaves in the South and spun, dyed, and woven in the North. The Industrial Revolution was largely fueled by the textile trade, which almost singlehandedly upheld slavery. When the cotton gin mechanized processing in 1793, prices, already subsidized by slave labor, dropped dramatically. Cheap goods drove demand, and a vicious cycle ensued. In the period between the invention of the cotton gin and the Civil War, America’s slave population shot from 700,000 to a staggering 4 million.

After the Civil War, companies like Carhartt, Eloesser-Heynemann, and OshKosh slung cotton coveralls to miners, railroad men, and factory workers. A Bavarian immigrant named Levi Strauss set up shop in San Francisco selling fabric and work-wear. Jacob Davis, an entrepreneurial Reno tailor, bought Strauss’s denim to make workingman’s pants, and added metal rivets to prevent the seams from ripping open. Davis sent two samples of his riveted pants to Strauss, and they patented the innovation together. Soon after, Davis joined Strauss in San Francisco to oversee production in a new factory. In 1890, Strauss assigned the ID number of 501 to their riveted denim “waist overalls.” The Levi’s 501 blue jean—which would become the best-selling garment in human history—was born.

Initially, jeans were proletarian western work-wear, but wealthy easterners inevitably ventured out in search of rugged cowboy authenticity. In 1928, a Vogue writer returned East from a Wyoming dude ranch with a snapshot of herself, “impossibly attired in blue jeans… and a smile that couldn’t be found on all Manhattan Island.” In June 1935, the magazine ran an article titled “Dude Dressing,” possibly one of the first fashion pieces to instruct readers in the art of DIY denim distressing: “What she does is to hurry down to the ranch store and ask for a pair of blue jeans, which she secretly floats the ensuing night in a bathtub of water—the oftener a pair of jeans is laundered, the higher its value, especially if it shrinks to the ‘high-water’ mark. Another innovation—and a most recent one, if I may judge—also goes on in the dead of night, and undoubtedly behind locked doors—an intentional rip here and there in the back of the jeans.” (...)

And now, in the midst of the Great Recession, we have come full circle, with the fairly recent demand for nostalgic “heritage” jeans that recall the hardscrabble industrialism of the Great Depression: work shirts and overalls faded to shades of cornflower blue, and rough-hewn, no-nonsense, deep-dyed dungarees. Like their precursors from the 20s and 30s, these jeans seem imbued with a sad nostalgia for a bygone country (but maybe this time with a better fit). We’ve entered the Dorothea Lange era of fashion—clothed in flecked wool cardigans, formidable flannel shirts, and sturdy work boots, Depression-era from head to toe.

by Jenni Avins, Vice |  Read more:
Photo courtesy of Advertising Archives

Kerry Skarbakka - The Struggle to Right Oneself (2011)
via:

Reposts


[ed. Slow news day so far, so here are some reposts from March and April, 2011. Check out the archives.]

Sex, Hastily, Then Beignets
It's Not a Secret
Tako Poke (Octopus Salad)
Shoot
The Sacred Child
Fancy Meatloaf
Good Thing
Sniffing Out a Menace
Better Handy Than Handsome

Does the Pope Matter?

The next pope should be increasingly irrelevant, like the last two. The farther he floats up, away from the real religious life of Catholics, the more he will confirm his historical status as a monarch in a time when monarchs are no longer believable. Some people think it a new or even shocking thing that so many Catholics pay no attention to papal fulminations—against, for instance, female contraceptives, male vasectomies, condoms to prevent the spread of AIDS, women’s equality, gay rights, divorce, masturbation, and artificial insemination (because it involves masturbation). But it is the idea of truth descending through a narrow conduit, straight from God to the pope, that is a historical invention.

When Cardinal Ratzinger was asked, before he became Pope Benedict XVI, if he was disturbed that many Catholics ignored papal teaching, he said he was not, since “truth is not determined by a majority vote.” But that is precisely how the major doctrines like those on the Trinity, the Incarnation, the Resurrection were fixed in creeds: at councils like that of Nicaea, by the votes of hundreds of bishops, themselves chosen by the people, before popes had any monopoly on authority. Belief then rose up from the People of God, and was not pronounced by a single oracle. John Henry Newman, in On Consulting the Faithful in Matters of Doctrine (1859), argued that there had been periods when the body of believers had been truer to the faith than had the Church hierarchy. He was silenced for saying it, but his historical arguments were not refuted.

Catholics have had many bad popes whose teachings or acts they could or should ignore or defy. Orcagna painted one of them in hell; Dante assigned three to his Inferno; Lord Acton assured Prime Minister William Gladstone that Pius IX’s condemnation of democracy was not as bad as the papal massacres of Huguenots, which showed that “people could be very good Catholics and yet do without Rome”; and John Henry Newman hoped Pius IX would die during the first Vatican Council, before he could do more harm. Acton’s famous saying, “Power tends to corrupt, and absolute power corrupts absolutely,” was written to describe Renaissance popes.

With the election of a new pope, the press will repeat old myths—that Christ made Peter the first pope, and that there has been an “apostolic succession” of popes from his time. Scholars, including great Catholic ones like Raymond Brown and Joseph Fitzmyer, have long known that Peter was no pope. He was not even a priest or a bishop—offices that did not exist in the first century. And there is no apostolic succession, just the twists and tangles of interrupted, multiple, and contested office holders. It is a rope of sand. At the beginning of the fifteenth century, for instance, there were three popes, none of whom would resign. A new council had to be called to start all over. It appointed Martin V, on condition that he call frequent councils—a condition he evaded after he was in power.

by Garry Wills, NY Review of Books |  Read more:
Image: Peter Paul Rubens: Saint Peter

Friday, March 15, 2013


bouquet garni
via:

Food Truck Economics

Bobby Hossain’s day starts early. Along with his family, he runs a food truck called Phat Thai that serves his mother’s Thai recipes “with a modern twist.” Although he won’t be serving customers for nearly four hours, he wakes up by 7:30am. He is working a double shift in the truck (lunch and dinner), so his brother is on prep duty. Bobby buys any last-minute supplies they need - ice, more bean sprouts - from Restaurant Depot while his brother cuts vegetables and slices meat at the kitchen space they use in a friend’s restaurant. His brother then drives the truck to their parents’ house. They load up and Bobby is on the road at 9:30.

From 11am-2pm they work at Mission Dispatch - a location in San Francisco’s Mission district that hosts food trucks. It brings in a dependable lunch crowd. Bobby’s mother cooks, his employee Frank takes orders, and Bobby hands out completed orders while helping the other two. After three hours, Phat Thai has served around 200 dishes.

Once the lunch crowd dies down, they return to the commissary, a space where they can clean dishes and dispose of garbage. Bobby checks whether he needs to get more supplies for tomorrow, preps, and then drives the truck to North Beach. From 5pm-8pm they will sell Thai dishes alongside other food trucks at a “market” of food trucks organized by Off The Grid. On busy days, they won’t have a chance to eat lunch.

Although it’s only half as busy as lunch, Phat Thai sells dinner to a dense crowd of families and professionals returning from work. By the time they serve their last customer, clean up, and park the truck, it’s approaching midnight. It’s one of Bobby’s busiest days. He and his brother only do double shifts twice a week. Since dinner is less lucrative than lunch, on other days they finish selling food by 3pm and he can get home by 5pm.

Compared to the original American food trucks (a.k.a. roach coaches) that frequented construction sites and baseball stadiums, food trucks like Phat Thai are a different breed. Instead of cheap, greasy fare, they sell $10 dishes featuring organic ingredients and fusions of different regional cuisines. Since their emergence as a social media sensation in 2008, they have asserted themselves as a force in the food scene, employing celebrated chefs and inspiring countless food reviews.

As a service that strips all the overhead costs of a restaurant down to the minimum requirements for selling food to customers, food trucks are also an irresistible metaphor for lean startups: the Silicon Valley practice of quickly rolling out a minimum viable product, allowing customers to try it, and engaging with them to improve your offering. Just as the falling cost of creating a website or app lowered the barriers to entry in the tech industry, food trucks allow aspiring restaurateurs to quickly put their creations in front of customers with minimal financial barriers.

But given that starting a restaurant is essentially a respectable way to throw money down a hole, are food trucks just mini money pits? What does it take to start and run a food truck? Why is a sandwich in a paper tray $10? Will food trucks disrupt the restaurant industry or is there a bubble?

by Alex Mayyasi, Priceonomics Blog | Read more:
Photo: uncredited

Frank's Photography on Flickr, Nanjing Road
via:

Paul Krugman Is Brilliant, but Is He Meta-Rational?

Nobel laureate, Princeton economics professor, and New York Times columnist Paul Krugman is a brilliant man. I am not so brilliant. So when Krugman makes strident claims about macroeconomics, a complex subject on which he has significantly more expertise than I do, should I just accept them? How should we evaluate the claims of people much smarter than ourselves?

A starting point for thinking about this question is the work of another Nobelist, Robert Aumann. In 1976, Aumann showed that under certain strong assumptions, disagreement on questions of fact is irrational. Suppose that Krugman and I have read all the same papers about macroeconomics, and we have access to all the same macroeconomic data. Suppose further that we agree that Krugman is smarter than I am. All it should take, according to Aumann, for our beliefs to converge is for us to exchange our views. If we have common “priors” and we are mutually aware of each other’s views, then if we do not agree ex post, at least one of us is being irrational.
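The premise behind Aumann’s result can be sketched numerically: two agents who start from the same prior and condition on identical evidence are forced by Bayes’ rule to the same posterior, so any remaining disagreement must come from somewhere else. (This is a toy illustration only; the numbers and likelihoods below are invented for the example, not drawn from Aumann’s paper.)

```python
from math import prod

def posterior(prior, likelihoods):
    """Posterior P(H | evidence) for a binary hypothesis H.

    likelihoods: list of (P(e | H), P(e | not H)) pairs,
    one per piece of shared evidence, assumed independent.
    """
    p_h = prior * prod(l_h for l_h, _ in likelihoods)
    p_not_h = (1 - prior) * prod(l_n for _, l_n in likelihoods)
    return p_h / (p_h + p_not_h)

# A common prior and a shared body of evidence (hypothetical values).
common_prior = 0.5
shared_evidence = [(0.8, 0.3), (0.6, 0.5), (0.9, 0.4)]

# Each agent updates on exactly the same prior and evidence...
belief_agent_a = posterior(common_prior, shared_evidence)
belief_agent_b = posterior(common_prior, shared_evidence)

# ...so their posteriors are identical: same inputs, same Bayes' rule.
assert belief_agent_a == belief_agent_b
print(round(belief_agent_a, 3))  # → 0.878
```

The interesting question the essay raises is why real experts with access to the same literature and data nonetheless diverge, which is where differing priors and self-deception enter.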

It seems natural to conclude, given these facts, that if Krugman and I disagree, the fault lies with me. After all, he is much smarter than I am, so shouldn’t I converge much more to his view than he does to mine?

Not necessarily. One problem is that if I change my belief to match Krugman’s, I would still disagree with a lot of really smart people, including many people as smart as or possibly even smarter than Krugman. These people have read the same macroeconomics literature that Krugman and I have, and they have access to the same data. So the fact that they all disagree with each other on some margin suggests that very few of them behave according to the theory of disagreement. There must be some systematic problem with the beliefs of macroeconomists.

In their paper on disagreement, Tyler Cowen and Robin Hanson grapple with the problem of self-deception. Self-favoring priors, they note, can help to serve other functions besides arriving at the truth. People who “irrationally” believe in themselves are often more successful than those who do not. Because pursuit of the truth is often irrelevant in evolutionary competition, humans have an evolved tendency to hold self-favoring priors and self-deceive about the existence of these priors in ourselves, even though we frequently observe them in others.

Self-deception is in some ways a more serious problem than mere lack of intelligence. It is embarrassing to be caught in a logical contradiction, as a stupid person might be, because it is often impossible to deny. But when accused of disagreeing due to a self-favoring prior, such as having an inflated opinion of one’s own judgment, people can and do simply deny the accusation.

How can we best cope with the problem of self-deception? Cowen and Hanson argue that we should be on the lookout for people who are “meta-rational,” honest truth-seekers who choose opinions as if they understand the problem of disagreement and self-deception. According to the theory of disagreement, meta-rational people will not have disagreements among themselves caused by faith in their own superior knowledge or reasoning ability. The fact that disagreement remains widespread suggests that most people are not meta-rational, or—what seems less likely—that meta-rational people cannot distinguish one another.

by Eli Dourado, The Umlaut | Read more:
Photo: David Shankbone

The Problem with Tumblr and Photography

A little over ten years ago, when I started blogging about photography, most photoblogs were presenting a single photographer’s work, one photograph at a time, usually per day. They were maintained by the photographers themselves. The scene was very small, and there was maybe a slightly naive earnestness about how it was done, which made following those blogs an appealing experience.

In the years since, for better or for worse, many of the ideas driving those early photoblogs have fallen by the wayside, with new formats and platforms replacing each other in a bewildering fashion. Many more photographers have come to embrace the web, in particular the social-networking bits.

Before looking at this in more detail, it might be worthwhile to point out that the internet seems made for photography. Photographs offer an immediacy that survives even under the most adverse, aka attention-deficit-disorder-plagued, circumstances.

Interestingly enough, Tumblr is nothing but a variant of the very early photoblogs on steroids. The basic format is the same: present usually one photograph (or another short snippet of information, like a video, an animated GIF, or a text) at a time. Following other Tumblrs then adds the steroids. While in the past one needed to visit one photoblog after another, Tumblr now offers a seemingly incessant stream of work, all in one place. What’s more, showcasing other people’s photographs appears to have overtaken showcasing one’s own.

This all sounds pretty great, except that there’s a multitude of problems, some of them well-known, others not so much. For starters, a large number of photographers are massively concerned about copyright. If everybody were to ask photographers for permission to showcase their work, Tumblr would grind to a halt in less time than it takes to say the word “copyright.”

This isn’t to say that concerns about copyright are invalid. But photographers worried about it might want to ask themselves what damage is done to their work (and income) if someone showcases their pictures to a possibly larger, possibly different audience, for noncommercial reasons. If a photographer is very concerned, a simple solution would be to not put photographs online. The moment they’re on the web, the medium’s properties kick in; the nature of the internet makes copyright violations incredibly simple. Then again, it also makes it easier to detect and go after those violations (as retailer DKNY just found out).

As far as I’m concerned, the bigger issue is the sloppy attribution of photographs, especially on Tumblr. Often, I find photographs where the source is not given at all, or where it is given in such a way that tracking down the photographer involves considerable work. This translates to a non-fair-use copyright violation, and unfortunately, many Tumblr users — often photographers themselves — are woefully uninformed or unconcerned about this.

by Jörg Colberg, Hyperallergic |  Read more:
Photo: Alec Soth

Thursday, March 14, 2013

The Planet Trillaphon


[ed. After posting a review this morning, I realized that I had never read DFW's The Planet Trillaphon as It Stands in Relation to the Bad Thing.  Wonders of the internet, here it is (pdf).]

Gene Kloss (1903-1996) - Rain Cloud at Evening
via:

Elliott Erwitt, New York, 1946
via:

The Curse of “You May Also Like”


Of all the startups that launched last year, Fuzz is certainly one of the most intriguing and the most overlooked. Describing itself as a “people-powered radio” that is completely “robot-free,” Fuzz bucks the trend toward ever greater reliance on algorithms in discovering new music. Fuzz celebrates the role played by human DJs—regular users who are invited to upload their own music to the site in order to create and share their own “radio stations.”

The idea—or, perhaps, hope—behind Fuzz is that human curators can still deliver something that algorithms cannot; it aspires to be the opposite of Pandora, in which the algorithms do all the heavy lifting. As its founder, Jeff Yasuda, told Bloomberg News last September, “there’s a big need for a curated type of experience and just getting back to the belief that the most compelling recommendations come from a human being.”

But while Fuzz's launch attracted little attention, the growing role of algorithms in all stages of artistic production is becoming impossible to ignore. Most recently, this role was highlighted by Andrew Leonard, the technology critic for Salon, in an intriguing article about House of Cards, Netflix's first foray into original programming. The series' origin myth is by now well-known: Having studied its user logs, Netflix discovered that a remake of the British series of the same name could be a huge hit, especially if it also featured Kevin Spacey and was directed by David Fincher.

“Can the auteur survive in an age when computer algorithms are the ultimate focus group?” asked Leonard. He wondered how the massive amounts of data that Netflix has gathered while users were streaming the first season of the series—how many times did they click the pause button?—would affect future episodes.

Many other industries are facing similar questions. For example, Amazon, through its Kindle e-reader, collects vast troves of information about reading habits of its users: what books they finish and what books they don't; what sections they tend to skip and which they read most diligently; how often they look up certain words in the dictionary and underline passages. (Amazon is hardly alone here: Other e-book players are as guilty.)

Based on all these data, Amazon can predict the ingredients that will make you keep clicking to the very end of the book. Perhaps Amazon could even give you alternate endings—just to make you happier. As a recent paper on the future of entertainment puts it, ours is a world where “stories can become adaptive algorithms, creating a more engaging and interactive future.”

Just as Netflix has figured out that, given all their data, it would be stupid not to enter the filmmaking business, so has Amazon discovered that it would be stupid not to enter the publishing business. Amazon's knowledge, however, goes deeper than Netflix's: Since it also runs a site where we buy books, it knows everything that there's to know about our buying behavior and the prices that we are willing to pay. Today Amazon runs half a dozen publishing imprints and plans to add more.

by Evgeny Morozov, Slate | Read more:
Photo by Beck Diefenbach/Reuters