Monday, November 17, 2014

The Copyright Monopoly Wars Are About to Repeat, But Much Worse

People sometimes ask me when I started questioning if the copyright monopoly laws were just, proper, or indeed sane. I respond truthfully that it was about 1985, when we were sharing music on cassette tapes and the copyright industry called us thieves, murderers, rapists, arsonists, and genocidals for manufacturing our own copies without their permission.

Politicians didn’t care about the issue, but handwaved away the copyright industry by giving them private taxation rights on cassette tapes, a taxation right that would later infest anything with digital storage capacity, ranging from games consoles to digital cameras.

In 1990, I bought my first modem, connecting to FidoNet, an amateur precursor to the Internet that had similar addressing and routing. We were basically doing what the Internet is used for today: chatting, discussing, sharing music and other files, buying and selling stuff, and yes, dating and flirting. Today, we do basically the same things in prettier colors, faster, and more realtime, on considerably smaller devices. But the social mechanisms are the same.

The politicians were absolutely clueless.

The first signal that something was seriously wrong in the heads of politicians was when they created a DMCA-like law in Sweden in 1990, one that made a server owner legally liable for forum posts made by somebody else on that server, if the server operator didn’t delete the forum post on notice. For the first time in modern history, a messenger had been made formally responsible for somebody else’s uttered opinion. People who were taking part in creating the Internet at the time went to Parliament to try to explain the technology and the social contract of responsibilities, and walked away utterly disappointed and desperate. The politicians were even more clueless than imagined.

It hasn’t gotten better since. Cory Doctorow’s observation in his brilliant speech about the coming war on general computing was right: Politicians are clueless about the Internet because they don’t care about the Internet. They care about energy, healthcare, defense, education, and taxes, because they only understand the problems that defined the structures of the two previous generations – the structures now in power have simply retained their original definition, and those are the structures that put today’s politicians in power. Those structures are incapable of adapting to irrelevance.

Enter bitcoin.

The unlicensed manufacturing of movie and music copies was and is such small-time potatoes that the politicians just didn’t and don’t have time for it, because energy healthcare defense. Creating draconian laws that threaten the Internet wasn’t an “I think this is a good idea” behavior. It has been a “copyright industry, get out of my face” behavior. The copyright industry understands this perfectly, of course, and throws tantrums about every five years to get more police-like powers, taxpayer money, and rent from the public coffers. Only when the population has been more in the face of politicians than the copyright industry – think SOPA, ACTA – have the politicians backpedaled, usually with a confused look on their faces, and then absentmindedly happened to do the right thing before going back to energy healthcare defense.

However, cryptocurrency like bitcoin – essentially the same social mechanisms, same social protocols, same distributed principles as BitTorrent’s sharing culture and knowledge outside of the copyright industry’s monopolies – is not something that passes unnoticed. Like BitTorrent showed the obsolescence of the copyright monopoly, bitcoin demonstrates the obsolescence of central banks and today’s entire financial sector. Like BitTorrent didn’t go head-to-head with the copyright monopoly but just circumvented it as irrelevant, bitcoin circumvents every single financial regulation as irrelevant. And like BitTorrent saw uptake in the millions, so does bitcoin.

Cryptocurrency is politically where culture-sharing was in about 1985.

Politicians didn’t care about the copyright monopoly. They didn’t. Don’t. No, they don’t, not in the slightest. That’s why the copyright industry has been given everything they point at. Now for today’s million dollar question: do you think politicians care about the authority of the central bank and the core controllability of funds, finances, and taxation?

YES. VERY MUCH.

This is going to get seriously ugly. But this time, we have a blueprint from the copyright monopoly wars. Cory Doctorow was right when he said this isn’t the war, this is just the first skirmish over control of society as a whole. The Internet generation is claiming that control, and the old industrial generation is pushing back. Hard.

by Rick Falkvinge, TorrentFreak |  Read more:
Image: uncredited

Why the Selfie Is Here to Stay

Selfies: such a cute niblet of a word, and yet I curse the day it was coined—it’s like a decal that won’t come unpeeled. Taking a picture of yourself with outstretched arm seems so innocent and innocuous, but what a pushy, wall-tiling tableau it has become—a plague of “duckfaces” and gang signs and James Franco (the Prince of Pose) staredowns. In my precarious faith in humankind’s evolution, I had conned myself into hoping, wishing, yearning that taking and sharing selfies would be a viral phase in the Facebook millennium, burning itself out like so many fads before, or at least receding into a manageable niche in the Internet arcade after reaching its saturation point. When Ellen DeGeneres snapped the all-star group selfie during the live broadcast of the 2014 Academy Awards, a say-cheese image that was re-tweeted more than two million times, it seemed as if that might be the peak of the selfie craze—what could top it? Once something becomes that commercialized and institutionalized, it’s usually over, but nothing is truly over now—the traditional cycles of out-with-the-old-in-with-the-new have been repealed, flattened into a continuous present. Nothing can undo the crabgrass profusion of the selfie, not even its capacity as an instrument of auto-ruination.

It has proved itself again and again to be a tool of the Devil in the wrong, dumb hands, as then-congressman Anthony Weiner learned when he shared a selfie of his groin district, driving a stake through a once promising, power-hungry political career. A serial bank robber in Michigan was apprehended after posting a Facebook selfie featuring the gun presumably used in the holdups. A woman in Illinois was arrested after she modeled for a selfie wearing the outfit she had just nicked from a boutique. A pair of meth heads were busted for “abandonment of a corpse” after they partook of a selfie with a pal who had allegedly overdosed on Dilaudid, then uploaded the incriminating evidence to Facebook. Tweakers have never been known for lucid behavior, but one expects more propriety from professional men and women in white coats, which is why it was a shock-wave scandale when Joan Rivers’s personal ear-nose-and-throat doctor, Gwen Korovin, was accused of taking a selfie while Rivers was conked out on anesthesia. Korovin emphatically denies taking a sneaky self-peeky, and had the procedure been smooth sailing this story would have fluttered about as a one-day wonder, a momentary sideshow. But Rivers didn’t survive; she went down for the count, and Korovin’s name, fairly or not, was dragged through the immeasurable mire of the Internet. (...)

Times Square selfies, even those involving a shish kebab device, are an improvement over the more prevalent custom of visitors’ asking passersby such as myself, “Would you mind taking a picture of us?,” and offering me their camera. Selfies at least spare the rest of us on our vital rounds. But it is difficult to find any upside to the indulgence of selfies in public places intended as sites of remembrance and contemplation. There is a minor epidemic of visitors taking grinning selfies at the 9/11 Memorial pools. And it isn’t just students on school trips for whom social media is the only context they have; it’s also adults who treat the 9/11 Memorial as if it were just another sightseeing spot, holding their camera aloft and taking a selfie, indifferent or oblivious to the names of the dead victims of the 1993 and 2001 attacks inscribed on the bronze panels against which some of them are leaning. I consider myself fortunate that I was able to visit the Vietnam Veterans Memorial, in Washington, D.C., before the advent of the selfie: the reflective walls etched with the names didn’t serve as a backdrop for a personal photo op. Today no spot is safe from selfie antics. Outrage exploded over a teenage girl posting a grinning selfie in front of Auschwitz, outrage that was compounded when she reacted to the ruckus by chirping in response, “I’m famous yall.”

There are those who analyze and rationalize the taking of selfies at former concentration camps or some stretch of hallowed ground as being a more complex and dialectical phenomenon than idle, bovine narcissism—as being an exercise in transactional mediation between personal identity and historical legacy, “placing” oneself within a storied iconography. Sounds like heavy hooey to me, if only because the taking of selfies seems to be more of a self-perpetuating process whose true purpose is the production of other selfies—self-documentation for its own sake, a form of primping that accumulates into a mosaic that may become fascinating in retrospect or as boring as home movies. Turning yourself into a Flat Stanley in front of a landmark doesn’t seem like much of a quest route into a deeper interiority, just as the museum-goers who take selfies in front of famous paintings and sculptures are unlikely to be deepening their aesthetic appreciation. Consider the dope who, intending to nab an action selfie, reportedly climbed onto the lap of a 19th-century sculpture in an Italian museum, a copy of a Greek original, only to smash the figure, snapping off one of its legs above the knee. As if weary-on-their-feet museum guards didn’t have enough to deal with.

by James Wolcott, Vanity Fair |  Read more:
Image: Darrow/Arte & Imagini SRL/Corbis

Sunday, November 16, 2014

Retire Already: The Forever Professors


The 1994 law ending mandatory retirement at age 70 for university professors substantially mitigated the problem of age discrimination within universities. But out of this law a vexing new problem has emerged—a graying—yea, whitening—professoriate. The law, which allows tenured faculty members to teach as long as they want—well past 70, or until they’re carried out of the classroom on a gurney—means professors are increasingly delaying retirement past age 70 or even choosing not to retire at all.

Like so much else in American life, deciding when to retire from academe has evolved into a strictly private and personal matter, without any guiding rules, ethical context, or sense of obligation to do what’s best—for one’s students, department, or institution. Only the vaguest questions—and sometimes not even those—are legally permitted. An administrator’s asking, "When do you think you might retire?" can bring on an EEOC complaint or a lawsuit. Substantive departmental or faculty discussions about retirement simply do not occur.

University professors may be more educated than the average American, but now that there’s no mandatory retirement age, their decisions about when to leave prove that they are as self-interested as any of their countrymen. Professors who continue to teach past 70 behave in exactly the same way the rest of us do when we decide to drive a car on a national holiday. Who among us stops to connect the dots between our decision to drive and a traffic jam, or that traffic jam and global warming?

Despite the boomer claim that 70 is the new 50, and the actuarial fact that those who live in industrialized countries and make it to the age of 65 have a life expectancy reaching well into the 80s, 70 remains what it has always been—old. By the one measure that should count for college faculty—how college students perceive their professors—it is definitively old. Keeping physically fit, wearing Levi’s, posting pictures on Instagram, or continually sneaking peeks at one’s iPhone doesn’t count for squat with students, who, after all, have grandparents who are 70, if not younger.

To invoke Horace, professors can drive out Nature with a pitchfork, but she’ll come right back in. Aging is Nature’s domain, and cannot be kneaded into a relativist cultural construct. It’s her means of leading us onto the off-ramp of life.

Professors approaching 70 who are still enamored with hanging out with students and colleagues, or even fretting about money, have an ethical obligation to step back and think seriously about quitting. If they do remain on the job, they should at least openly acknowledge they’re doing it mostly for themselves.

Of course, there are exceptions. Some professors, especially in the humanities, become more brilliant as they grow older—coming up with their best ideas and delivering sagacity to their students. And some research scientists haul in the big bucks even when they’re old. But those cases are much rarer than older professors vainly like to think. (...)

The average age for all tenured professors nationwide is now approaching 55 and creeping upward; the number of professors 65 and older more than doubled between 2000 and 2011. In spite of those numbers, according to a Fidelity Investments study conducted about a year ago, three-quarters of professors between 49 and 67 say they will either delay retirement past age 65 or—gasp!—never retire at all. They ignore, or are oblivious to, the larger implications for their students, their departments, and their colleges.

And they delude themselves about their reasons for hanging on. In the Fidelity survey, 80 percent of those responding said their primary reason for wanting to continue as faculty members was not money but "personal or professional" reasons. A Fidelity spokesman offered what seemed to me a naïve interpretation of that answer: "Higher-education employees, especially faculty, are deeply committed to their students, education, and the institutions they serve."

Maybe. But "commitment to higher education" covers some selfish pleasures.

by Laurie Fendrich, Chronicle of Higher Education | Read more:
Image: Scott Seymour

Leaving Shame on a Lower Floor


Among the many vertiginous renderings for the penthouse apartments at 432 Park Avenue, the nearly 1,400-foot-high Cuisenaire rod that topped out last month, is one of its master (or mistress) of the universe bathrooms, a glittering, reflective container of glass and marble. The image shows a huge egg-shaped tub planted before a 10-foot-square window, 90 or more stories up. All of Lower Manhattan is spread out like the view from someone’s private plane.

Talk about power washing.

The dizzying aerial baths at 432 Park, while certainly the highest in the city, are not the only exposed throne rooms in New York. All across Manhattan, in glassy towers soon to be built or nearing completion, see-through chambers will flaunt their owners, naked, toweled or robed, like so many museum vitrines — although the audience for all this exposure is probably avian, not human.

It seems the former touchstones of bathroom luxury (Edwardian England, say, or ancient Rome) have been replaced by the glass cube of the Apple store on Fifth Avenue. In fact, Richard Dubrow, marketing director at Macklowe Properties, which built 432 and that Apple store, described the penthouse “wet rooms” (or shower rooms) in just those terms.

Everyone wants a window, said Vickey Barron, a broker at Douglas Elliman and director of sales at Walker Tower, a conversion of the old Verizon building on West 18th Street. “But now it has to be a Window.” She made air quotes around the word. “Now what most people wanted in their living rooms, they want in their bathrooms. They’ll say, ‘What? No View?’ ” (...)

From the corner bathrooms at 215 Chrystie Street, Ian Schrager’s upcoming Lower East Side entry designed by Herzog & de Meuron, with interior architecture by the English minimalist John Pawson, you can see the Chrysler Building and the 59th Street Bridge, if you don’t pass out from vertigo. The 19-foot-long bathrooms of the full-floor apartments are placed at the building’s seamless glass corners. It was Mr. Pawson who designed the poured-concrete tub that oversees that sheer 90-degree angle.

Just looking at the renderings, this reporter had to stifle the urge to duck.

“Ian’s approach is always, If there’s a view, there should be glass,” Mr. Pawson said. “It’s not about putting yourself on show, it’s about enjoying what’s outside. Any exhibitionism is an unfortunate by-product. I think what’s really nice is that at this level you’re creating a gathering space. You can congregate in the bathroom, you can even share the bath or bring a chair in.”

by Penelope Green, NY Times |  Read more:
Image: DBOX for CIM Group & Macklowe Properties

Saturday, November 15, 2014


Mark Smith
via:

Taming the Wild Tuna

Kushimoto, Japan—Tokihiko Okada was on his boat one recent morning when his cellphone rang with an urgent order from a Tokyo department store. Its gourmet food section was running low on sashimi. Could he rustle up an extra tuna right away?

Mr. Okada, a researcher at Osaka’s Kinki University, was only too happy to oblige—and he didn’t need a fishing pole or a net. Instead, he relayed the message to a diver who plunged into a round pen with an electric harpoon and stunned an 88-pound Pacific bluefin tuna, raised from birth in captivity. It was pulled out and slaughtered immediately on the boat.

Not long ago, full farming of tuna was considered impossible. Now the business is beginning to take off, as part of a broader revolution in aquaculture that is radically changing the world’s food supply.

“We get so many orders these days that we have been catching them before we can give them enough time to grow,” said Mr. Okada, a tanned 57-year-old who is both academic and entrepreneur. “One more year in the water, and this fish would have been much fatter,” as much as 130 pounds, he added.

With a decadeslong global consumption boom depleting natural fish populations of all kinds, demand is increasingly being met by farm-grown seafood. In 2012, farmed fish accounted for a record 42.2% of global output, compared with 13.4% in 1990 and 25.7% in 2000. A full 56% of global shrimp consumption now comes from farms, mostly in Southeast Asia and China. Oysters are started in hatcheries and then seeded in ocean beds. Atlantic salmon farming, which only started in earnest in the mid-1980s, now accounts for 99% of world-wide production—so much so that it has drawn criticism for polluting local water systems and spreading diseases to wild fish.

Until recently, the Pacific bluefin tuna defied this sort of domestication. The bluefin can weigh as much as 900 pounds and barrels through the seas at up to 30 miles an hour. Over a month, it may roam thousands of miles of the Pacific. The massive creature is also moody, easily disturbed by light, noise or subtle changes in the water temperature. It hurtles through the water in a straight line, making it prone to fatal collisions in captivity.

The Japanese treasure the fish’s rich red meat so much that they call it “hon-maguro” or “true tuna.” Others call it the Porsche of the sea. At an auction in Tokyo, a single bluefin once sold for $1.5 million, or $3,000 a pound.

All this has put the wild Pacific bluefin tuna in a perilous state. Stocks today are less than one-fifth of their peak in the early 1960s, around the time Japanese industrial freezer ships began prowling the oceans, according to an estimate by an international governmental committee monitoring tuna fishing in the Pacific. The wild population is now estimated by that committee at 44,848 tons, or roughly nine million fish, down nearly 50% in the past decade.

The decline has been exacerbated by earlier efforts to cultivate tuna. Fishermen often catch juvenile fish in the wild that are then raised to adulthood in pens. The practice cuts short the breeding cycle by removing much of the next generation from the seas.

Scientists at Kinki University decided to take a different approach. Kinki began studying aquaculture after World War II in an effort to ease food shortages. Under the motto “Till the Ocean,” researchers built expertise in breeding fish popular in the Japanese diet such as flounder and amberjack.

In 1969, long before the world started craving fresh slices of fatty tuna, Kinki embarked on a quest to tame the bluefin. It sought to complete the reproduction cycle, with Pacific bluefin tuna eggs, babies, juveniles and adults all in the farming system.

Two scientists from Kinki went out to sea with local fishermen, seeking to capture juvenile tuna for raising in captivity. “We researchers always wanted to raise bluefin because it’s big and fast. It’s so special,” said one of the scientists, Hidemi Kumai, now 79 years old. “We knew from the beginning it was going to be a huge challenge.”

It was more than that. The moment the researchers grabbed a few juvenile fish out of a net, the skin started to disintegrate, killing them. It took four years just to perfect delicate fast-releasing hooks for capturing juveniles and moving them into pens.

“Local fishermen used to say to us, ‘Professors, you are crazy. Bluefin can’t live in confinement,’ ” Mr. Kumai recalled.

In 2011, Kinki lost more than 300 grown fish out of its stock of 2,600 after an earthquake-triggered tsunami hit a coastline 400 miles away. The tsunami caused a quick shift in tide and clouded the water, causing the fish to panic and smash into nets. Last year, a typhoon decimated its stock. Again this summer, frequent typhoons kept the researchers on their toes as they waited for the breeding season to start. “Oftentimes, all we can do is pray,” said Mr. Okada as he threw a mound of mackerel into the pen using a spade.

It took nearly 10 years for fish caught in the wild to lay eggs at Kinki’s research pens. Then, in 1983, they stopped laying, and for 11 years, researchers couldn’t figure out the problem. The Kinki scientists now attribute the hiatus to intraday drops in water temperature, a lesson learned only after successful breeding at a separate facility in southern Japan.

In the summer of 1994, the fish finally produced eggs again. The researchers celebrated and put nearly 2,000 baby fish in an offshore pen. The next morning, most of them were dead with their neck bones broken. The cause was a mystery until a clue came weeks later. Some of the babies in the lab panicked when the lights came on after a temporary blackout and killed themselves.

Mr. Kumai and colleagues realized that sudden bright light from a car, fireworks or lightning caused the fish to panic and bump into each other or into the walls. The solution was to keep the lights on at all times.

For nearly five decades, Mr. Kumai has lived along a quiet inlet, steps from the university’s research pens. He calls the fish “my family.”

“These fish can’t protest with their mouths so they protest by dying,” he says. “We must listen to them carefully so we catch the problems before they resort to dying.”

by Yuka Hayashi, WSJ |  Read more:
Image: Jereme Souteyrat for the WSJ 

Robert Longo
via:

The Ice-Bucket Racket

Ever since the ice-bucket challenge swept the Internet this summer, raising more than $115 million for A.L.S. research, a legion of imitators has sprung up to try to cash in themselves. In the approaching holiday season, as fund-raising appeals swell, we can now smash a pie in our faces, snap selfies first thing in the morning or take a photo of ourselves grabbing our crotches, among other tasteful gestures, to express solidarity with various worthy causes. But the failure of these newer gimmicks to enjoy anywhere near the same popularity as the frigid original demonstrates the peculiar and finicky nature of our altruism — a psychological puzzle that both scientists and economists are trying to decipher. (...)

Most charitable efforts elicit our sympathy by showing us photographs of the afflicted and telling us tales of suffering. But just as people avert their eyes from beggars, most of us can shift our attention from stuff that depresses us. One great curiosity, and advantage, of the ice-bucket challenge was that it did very little to remind us of the disease that was its supposed inspiration.

Fund-raising professionals hoping to decode the magic of the challenge, however, will be dispirited to learn that this master game plan wasn’t exactly intentional. According to Josh Levin, a writer at Slate, the challenge appears to have instead emerged spontaneously from similar dares, like “polar plunges” into ice-cold lakes. At first, it simply consisted of using social media to dare others to dump a pail of ice water over themselves. Later, participants began donating $100 to any of a wide variety of charities. It became linked to A.L.S. only later, when a couple of pro golfers took the challenge and chose that as their good cause.

Sander van der Linden, a social psychologist at Princeton University who has researched attitude change, thinks several factors allowed the ice-bucket challenge to become a viral and fund-raising sensation. Its public nature forced people to either accept the task or suffer damage to their reputations. Other stimulants to action included peer pressure from friends, the “helper’s high” that results from aiding others and the fortuitous participation of celebrities like Bill Gates and Katy Perry. Particularly crucial was the 24-hour deadline that the challenge gave to either drench oneself or shell out. “When you make people set specific goals, they become more likely to change behavior,” van der Linden told me. “People like setting goals, and they like achieving goals.”

Two additional features were particularly clever, according to van der Linden. One was the ingenious way that the challenge fed our collective narcissism by allowing us to celebrate with selfies or videos of our drenched faces and bodies on Facebook and Twitter. An even deeper motivation may have been the precisely calibrated amount of self-sacrifice involved. “If you’re going to elicit money from people, it helps to have some way of doing it that is at least slightly painful, since that makes the whole experience about more than just giving away what may be a relatively trivial amount of money,” van der Linden said.

by Ian McGugan, NY Times |  Read more:
Image: Joon Mo Kang

Friday, November 14, 2014

The Anxiety of the Forever Renter

What no economist has measured is this: There’s something fundamentally demeaning about being a renter, about having to ask permission to change the showerhead, about having to mentally deduct future losses from deposit checks for each nail hammered into the wall to hang family photos. There’s something degrading about the annual rent increase that comes with this implied taunt to its captive audience: What are you going to do, move out?

I’m not worried about what it would mean for us to be a Nation of Renters, whether that would fray the social fabric or unravel homeownership’s side effects on civic participation or crime rates. Some people are worried about this. “FDR mentioned that ‘a nation of homeowners is unconquerable,’” I heard the chief economist for the National Association of Realtors tell a room full of policymakers suspicious of the mortgage-interest deduction a few months ago. “We have to think,” he pleaded, “that maybe there is something more than numbers to a homeownership society” – as if we might devolve into some kind of chaos if enough of us didn’t care enough about our property to own it.

What I am worried about is the dill plant on my second-floor windowsill. I rotate it a little bit every day because it only gets sun from the western exposure. It has been dying since the day I brought it home. I want to put it in the ground, or at least outside. For several weeks over the summer, I tried furtively growing oregano in a small pot on the communal front stoop of our 20-unit red-brick apartment building. I carried cups of water out to it late at night when I thought no one was looking.

Eventually, it disappeared.

Earlier this year, my husband and I took a deep breath, purchased a power tool and did something permanent about our kitchen-storage problem: We drilled metal Ikea pot racks into the wall. Today the room is happily lined with saucepans. But every time I see the property manager coming or going from the building, I worry that she’ll ask to enter our unit, where she’ll spy what we’ve done to drywall that doesn’t belong to us.

More recently, my husband called our property manager to announce a long-awaited addition to our household that we thought would be welcome.

“I just got a job,” he told her, literally on the day that he had just gotten a job. “And my wife said when I get a job, I can have a dog. So I’m calling to tell you I’m getting a dog.”

As it turns out, we will not be getting a dog.

“You can have a cat,” she offered. (...)

Now we have each been at this – renting – for about a decade, and we’re reaching that point, married, starting our 30s, when it starts to feel like time to live in a more dignified way. We want to grow herbs outdoors and shop in the heavy-duty hardware store aisles and change the color of our living room. We want to make irreversible choices about wall fixtures and rash decisions at the animal shelter.

I've been thinking about all of these things a lot lately, while reading about the convincing reasons why homeownership no longer makes as much sense as it used to. Workers are no longer tied to factories – and the bedroom communities that surround them – because no one works in factories anymore. Now people telecommute. They get transferred to Japan indefinitely. Companies no longer offer the implicit contract of lifetime employment for hard workers, and so hard workers think nothing of updating their résumés every day.

And I think about my own transience: I’ve lived in eight apartments in six cities over the past nine years. My husband and I like to pick up and move (most recently, just eight blocks down the street from our previous place) as if we were selecting a new grocery store. We have a motto as a couple, which applies equally to weekend and life plans: “We’ll see how we feel,” we say.

We have trouble thinking beyond the nearest horizon, not because we don’t like the idea of commitment, but because we want to be free to theoretically commit to anything that may come up tomorrow. What if an incredible job offer wants to relocate us to Riyadh? What if we wake up Saturday morning and decide that we’ve tired of Washington, D.C.? What if – as many of our friends have experienced – one of us loses a job?

We’re both afflicted with a dangerous daydreaming ability to envision ourselves living anywhere we step off a plane. We never take a trip and think, “It’s wonderful to visit friends in Seattle,” or “Chicago is a great place for tourists in the summertime.” We always think: What if we lived here? Maybe we should live here? We could live in Key West! My husband has never even been to Portland, but we still nurse a sneaking suspicion that we should probably be living there.

In this way, we are the quintessential young professionals of the new economy – restless knowledge workers who deal in “projects,” not “careers,” who can no sooner commit to a mortgage than we can a lifetime of desk work. Our attitude is a national epidemic. It’s harder to get a mortgage today than it was 10 years ago. But a lot of people also just don’t want one any more. At the height of the housing boom, 69 percent of American households owned their homes. Housing researcher Arthur Nelson predicted to me that number would fall to 62 percent by 2020, meaning every residence built between now and then will need to be a rental.

I haven’t been able to figure out in my own household, however, how this aversion to permanence can coexist with our rising ire about renting. And I don’t know how whole cities will accommodate this new demographic: the middle-class forever renter.

Both Nelson and the urban theorist Richard Florida have floated the idea that we need some kind of hybrid rental/homeownership model, some system that decouples “renter” status from income class, while allowing professionals who would have been homeowners 20 years ago to live in a comparable setting without the millstone. Maybe we allow renters to customize their homes as if they owned them, or we enable condo owners to quickly unload property to rental agents.

Short of putting us all in houseboats, I don’t know what these hybrid homes would look like, how they’d be paid for or if anyone will be willing to build them. But I suspect the trick lies outside of the architectural and financial details, that it lies in removing that fear of the approaching property manager, that lack of control over a dying dill plant. It lies in creating a feeling of ownership without the actual deed.

by Emily Badger, CityLab |  Read more:
Image: Reuters

The Nightmare (Caminito del Rey. Álora, Málaga, Spain)
via:

Martha Rosler, Bathroom Surveillance or Vanity Eye, from the series Body Beautiful, or Beauty Knows No Pain c. 1967-1972

Inside the NFL’s Replay Command Center

The digital clock on the wall inside the officiating bunker at the NFL offices in Manhattan reads 16:25:15, or 4:25 p.m. on the East Coast. “Kickoff in Oakland,” says Austin Moss, the replay technician at the station monitoring the Cardinals-Raiders game. The upper-left 27-inch high-definition screen, one of four in front of Moss, shows Arizona kicker Chandler Catanzaro booting the opening kickoff into the end zone.

If you’re not careful, or you’re over-caffeinated, you can easily suffer sensory overload in this room, especially in the early window. On this Week 7 Sunday there were eight 1 p.m. ET kickoffs. But even in the late window, with only three games coming at you inside the room, it’s harried. Standing in the center, staring at the large split-screen monitor showing all three games, is the NFL’s vice president of officiating, Dean Blandino, dressed in khakis, a blue striped oxford shirt and blue sweater. To his left: Alberto Riveron, the number two man in the officiating department. They’re the adjudicators on this day—the two men in charge of the new system of replay checks and balances.

This room is called Art McNally GameDay Central, in honor of the longtime official and officiating executive. But on Sunday it’s Replay Central. In this space, 42 feet long and 36 feet wide, Blandino and Riveron ride herd on the first-year replay system and consult with referees on the field and replay officials in far-off stadium replay booths for every review in every NFL game.

The system was put in place to minimize the inconsistency in replay reviews, and to reduce the time the average review takes. While the referee on site is going through the preliminary mechanics of the replay process—checking for the challenge flag, communicating with the flag-throwing coach, announcing to the crowd and the television audience why the play is being reviewed and hustling to get under the replay hood on the sideline—Blandino can look at the replays and line up the one or two or three most applicable. That way, the ref on the field will be able to watch the relevant replays without having to spend time going through the entire range of them himself. The ref also has the voice of New York in his ear, telling him what’s important and eliminating the fluff.

But the natural question is: Are too many cooks spoiling the broth? Is replay actually better when the league office and the millions it can spend on technology—there are 82 television monitors and 21 NFL employees in this room the size of a Manhattan studio apartment—intercede in the business of making sure the seven-man on-field crews get third-down spots correct? Or is the New York influence too much Big Brother?

by Peter King, SI |  Read more:
Image: NFL

Why Banksy Is (Probably) a Woman


Banksy Does New York, a new documentary airing on HBO on Nov. 17, opens on a bunch of scofflaws trying to jack an inflatable word balloon reading “Banksy!” from the side of a low-rise building in Queens. These hooligans weren’t Banksy. Neither were the police officers who took possession of the piece after the failed heist and denied that it was art. Nor in all likelihood was the silver-haired man who sold $420 worth of Banksy prints for $60 a pop in Central Park, or the drivers who slowly trawled New York streets in trucks tricked out with Banksy’s sculpture, or the accordionist accompanying one of Banksy’s installations. While the film shares a lot of insights about street art, media sensationalism, viral phenomena, and the people who make Banksy possible, it doesn’t cast a light on who Banksy is or what she looks like. (...)

In the 2010 film Exit Through the Gift Shop, another documentary about street art, Banksy appears as an anonymous figure whose voice is disguised, but who is plainly a man. So that would seem to put the question to rest. Further to the point, the street artist Shepard Fairey referred to Banksy as “he” and “him” throughout an interview with Brian Lehrer the same year. Fairey would be in a position to know, presumably: He’s the closest thing Banksy has to a colleague. Fairey says that Banksy insists on anonymity, in part, to manage his image in the press. “He controls the way his message is put out very carefully,” Fairey says in the interview.

Yet these pieces of evidence confuse rather than clarify the issue. Exit Through the Gift Shop is a classic piece of misdirection. Over the course of the movie, the film’s would-be documentarian, Thierry Guetta, is exposed as a poor filmmaker. Partway through, Banksy takes over the production, turning it into a documentary about the documentarian instead. To complete the meta romp, Guetta, working under the nom de rue Mr. Brainwash, proceeds to rip off Banksy’s style. All of this means that Fairey, Banksy’s co-conspirator in Banksy’s film, is an unreliable narrator.

During the very first interview that Banksy gave to The Guardian, another figure was present (“Steve,” Banksy’s agent). Another figure is always present, says Canadian media artist Chris Healey, who has maintained since 2010 that Banksy is a team of seven artists led by a woman—potentially the same woman with long blonde hair who appears in scenes depicting Banksy’s alleged studio in Exit Through the Gift Shop. Although Healey won’t identify the direct source for his highly specific claim, it’s at least as believable as the suggestion that Banksy is and always has been a single man. (...)

Part of what makes Banksy’s work so popular is that it doesn’t operate much like street art at all. Think about Invader or Fairey, artists who appear in Exit Through the Gift Shop: Invader’s 8-bit career began with a single “Space Invaders” icon that the artist reiterated endlessly. Fairey’s work started with a stencil of Andre the Giant prefaced by the word “Obey,” again, repeated over and over. While they’re both more like media moguls than graffiti writers today, Fairey and Invader started with the same strategy: to project themselves into public spaces by broadcasting themselves all over it.

That ambition to control a public space through this sort of redundant branding, to make the street your own, is a masculine one—and it’s shared by the overwhelming majority of street artists. (...)

Compared to the highly visible work of Invader or Fairey or dozens of other high-profile street artists, Banksy’s work is different. Girls and women figure into Banksy’s stenciled figures, for starters, something that isn’t true of 99% of street art. Banksy’s work has always done more than project “Banksy” ad nauseam. (In fact, a “handling service” called Pest Control exists to authenticate Banksy’s protean projects.) Banksy’s graffiti understands and predicates a relationship between the viewer and the street, something that graffiti that merely shouts the artist’s name or icon over and over (and over and over) doesn’t do.

Maybe it gives Banksy too much credit to say that her work shows a greater capacity for imagining being in someone else’s shoes.

by Kriston Capps, CityLab |  Read more:
Image: Andrew Winning/Reuters

Thursday, November 13, 2014