Thursday, November 13, 2014

Fallen Arches: Can McDonald's Get Its Mojo Back?


[ed. I liked those wings, but they were pretty dang expensive.]

Perhaps no episode captures what’s ailing the world’s largest restaurant company better than the Mighty Wings Debacle of 2013. In September of last year, McDonald’s launched an ambitious program to sell deep-fried chicken wings across its 14,000 U.S. locations. The wings were a staple in Hong Kong, where the crisp cayenne-and-chili-pepper coating was developed. And a similar version tickled palates in Atlanta during testing. One blogger wrote: “Holy crap, those are really freakin’ good.” The wings were giant (“bone in,” as the jargon went) and meaty. And by the end of the heavily advertised eight-week promotion, McDonald’s was left with 10 million pounds of unsold chicken, a whopping 20% of its inventory. The Mighty Wings didn’t flap.

At corporate headquarters in Oak Brook, Ill., executives began pointing fingers. Some blamed the coating, saying it was too spicy for broad American tastes. Some blamed the price, at a hefty $1 per wing. A box of five Mightys cost a buck more than the equivalent number at KFC. McDonald’s had justified the lofty price because the wings were so immense, taken from its suppliers’ gigantic eight-pound chickens. The wings were arguably a bona fide deal. But this brings up problem No. 3: Customers didn’t make that connection. Cost-conscious diners gazing up at the menu didn’t realize they’d be getting “absurdly huge drumettes,” as the blogger put it. “This was quality for price,” a former executive tells Fortune, “but McDonald’s is known for quantity for price.” McDonald’s might have thought the wings were a value. Customers simply viewed them as expensive.

CEO Don Thompson, then in the job for a little over a year, had needed the wings to be a hit. The company’s performance had slipped on his watch, suffering from disappointing sales growth and deteriorating margins. Since then things have gotten worse—much worse. In late October, McDonald’s reported a significant loss of market share and its fourth straight quarter of negative same-store sales in its U.S. operations. Overall, the company reported a distressing 30% decline in profit. Expenses were growing even as sales were falling—a big problem for any company.

Analysts are now predicting that 2014 will be the first year of negative global same-store sales since 2002. “People have seen results go from the best in the industry to one of the worst in the course of three years,” says Stephens analyst Will Slabaugh. The year has been written off—there will be no bonuses for anybody.

Some of the pressures facing the company are beyond its control: higher commodity costs, fiercer competition, a restaurant industry showing little to no growth, and a strapped lower-income consumer. There have also been a handful of one-off disasters, including a supplier in China accused of selling expired meat and the closure of nine company-owned restaurants by the government in Russia. With its $28.1 billion in revenue—the average McDonald’s restaurant brings in $2.6 million in sales, compared to Burger King’s $1.2 million, according to research firm Technomic—the company’s scale makes it harder to move the needle. McDonald’s size makes it a target too, putting it in the cross hairs of minimum wage and nutrition battles.

But the company has even bigger—dare we say, Mighty Wing–size—challenges, not least of which is an existential one: McDonald’s is the quintessential quick-serve restaurant. It has risen to the top of the fast-food chain by being comfortably, familiarly, iconically “mass market” and so ubiquitous as to be the Platonic ideal of “convenient.” Neither of these selling points, however, is as high as it was even a decade ago on Americans’ list of dining priorities. A growing segment of restaurant goers are choosing “fresh and healthy” over “fast and convenient,” and McDonald’s is having trouble convincing consumers that it’s both. Or even can be both. “It is a battle over perception, and they’re losing,” says Aaron Allen, a global restaurant consultant.

by Beth Kowitt, Fortune | Read more:
Image: Adam Voorhes

The Mercenaries

Ex-NSA hackers and their corporate clients are stretching legal boundaries and shaping the future of cyberwar.

Bright twenty- and thirtysomethings clad in polo shirts and jeans perch on red Herman Miller chairs in front of silver Apple laptops and sleek, flat-screen monitors. They might be munching on catered lunch—brought in once a week—or scrounging the fully stocked kitchen for snacks, or making plans for the company softball game later that night. Their office is faux-loft industrial chic: open floor plan, high ceilings, strategically exposed ductwork and plumbing. To all outward appearances, Endgame Inc. looks like the typical young tech startup.

It is anything but. Endgame is one of the leading players in the global cyber arms business. Among other things, it compiles and sells zero day information to governments and corporations. “Zero days,” as they’re known in the security business, are flaws in computer software that have never been disclosed and can be secretly exploited by an attacker. And judging by the prices Endgame has charged, business has been good. Marketing documents show that Endgame has charged up to $2.5 million for a zero day subscription package, which promises 25 exploits per year. For $1.5 million, customers have access to a database that shows the physical location and Internet addresses of hundreds of millions of vulnerable computers around the world. Armed with this intelligence, an Endgame customer could see where its own systems are vulnerable to attack and set up defenses. But it could also find computers to exploit. Those machines could be mined for data—such as government documents or corporate trade secrets—or attacked using malware. Endgame can decide whom it wants to do business with, but it doesn’t dictate how its customers use the information it sells, nor can it stop them from using it for illegal purposes, any more than Smith & Wesson can stop a gun buyer from using a firearm to commit a crime.

Endgame is one of a small but growing number of boutique cyber mercenaries that specialize in what security professionals euphemistically call “active defense.” It’s a somewhat misleading term, since this kind of defense doesn’t entail just erecting firewalls or installing antivirus software. It can also mean launching a pre-emptive or retaliatory strike. Endgame doesn’t conduct the attack, but the intelligence it provides can give clients the information they need to carry out their own strikes. It’s illegal for a company to launch a cyberattack, but not for a government agency. According to three sources familiar with Endgame’s business, nearly all of its customers are U.S. government agencies. According to security researchers and former government officials, one of Endgame’s biggest customers is the National Security Agency. The company is also known to sell to the CIA, Cyber Command, and the British intelligence services. But since 2013, executives have sought to grow the company’s commercial business and have struck deals with marquee technology companies and banks.

Endgame was founded in 2008 by Chris Rouland, a top-notch hacker who first came on the Defense Department’s radar in 1990—after he hacked into a Pentagon computer. Reportedly the United States declined to prosecute him in exchange for his working for the government. He started Endgame with a group of fellow hackers who worked as white-hat researchers for a company called Internet Security Systems, which was bought by IBM in 2006 for $1.3 billion. Technically, they were supposed to be defending their customers’ computers and networks. But the skills they learned and developed were interchangeable with those used for offense.

Rouland, described by former colleagues as domineering and hot-tempered, has become a vocal proponent for letting companies launch counterattacks on individuals, groups, or even countries that attack them. “Eventually we need to enable corporations in this country to be able to fight back,” Rouland said during a panel discussion at a conference on ethics and international affairs in New York in September 2013.

Rouland stepped down as the CEO of Endgame in 2012, following embarrassing disclosures of the company’s internal marketing documents by the hacker group Anonymous. Endgame had tried to stay quiet and keep its name out of the press, and went so far as to take down its website. But Rouland provocatively resurfaced at the conference and, while emphasizing that he was speaking in his personal capacity, said American companies would never be free from cyberattack unless they retaliated. “There is no concept of deterrence today in cyber. It’s a global free-fire zone.”

by Shane Harris, Slate | Read more:
Image: Charlie Powell

The Uniform I’ve Chosen for Myself

Some people express themselves through their clothes; they stretch the parameters of office dress codes with unexpected cuts, vintage menswear, and statement jewelry. I admire those people, but I am not one of them. The last thing that I want to do on a weekday morning, or at any time, honestly, is think about what to wear, but I still want to look good.

So I developed a uniform:
• A gray or black long-sleeved V-neck shirt.
• A gray or black A-line skirt (Brooks Brothers, via Rue La La).
• A scarf, which hopefully disguises the fact that I’m wearing a T-shirt with no bra to work.
• Tights if it’s cold, fleece-lined tights if it’s really cold.
• Black heels.

In winter, I swap the gray and black V-neck shirts for gray and black V-neck sweaters that are thin enough to tuck in. If I need to be more formal, I add a black blazer. If I want to be more casual, I swap out the skirt and heels for black pants (ok, glorified leggings) and a pair of flats.

I picked this particular combination of clothing because it’s appropriate for my office, reasonably comfortable, and flattering for my skin tone and body type. Obviously, everyone’s ideal uniform will look different. My only advice, if you’re looking to create your own, is to stick with neutrals. People are less likely to notice that you’re wearing the same thing every day if it’s unmemorable.

That’s the main issue with wearing a uniform. For whatever reason, it’s considered socially unacceptable to wear the same thing every day, unless you have a job where it’s mandatory. Every time that I think that I’ve solved the problem of “workwear,” I immediately panic and wonder if my coworkers are talking about the fact that I only seem to own four pieces of clothing, even though I’m not in a fashion-related industry; even though I can’t remember what they wore yesterday, and they probably can’t remember what I wore either; even though I have a pile of identical shirts, so it’s not like I’m wearing the same thing day after day and it’s becoming increasingly sweaty and dirty and gross. Even though I don’t really want to live in a world where you’re judged for not owning enough clothes.

by Antonia Noori Farzan, The Billfold | Read more:
Image: Antonia Noori Farzan

Wednesday, November 12, 2014

Iggy Pop and The Stooges

Charles Sheffield

Gut–Brain Link Grabs Neuroscientists

Companies selling ‘probiotic’ foods have long claimed that cultivating the right gut bacteria can benefit mental well-being, but neuroscientists have generally been sceptical. Now there is hard evidence linking conditions such as autism and depression to the gut’s microbial residents, known as the microbiome. And neuroscientists are taking notice — not just of the clinical implications but also of what the link could mean for experimental design. (...)

Although correlations have been noted between the composition of the gut microbiome and behavioural conditions, especially autism, neuroscientists are only now starting to understand how gut bacteria may influence the brain. The immune system almost certainly plays a part, Mazmanian says, as does the vagus nerve, which connects the brain to the digestive tract. Bacterial waste products can also influence the brain — for example, at least two types of intestinal bacterium produce the neurotransmitter γ-aminobutyric acid (GABA).

The microbiome is likely to have its greatest impact on the brain early in life, says pharmacologist John Cryan at University College Cork in Ireland. In a study to be presented at the neuroscience meeting, his group found that mice born by caesarean section, which hosted different microbes from mice born vaginally, were significantly more anxious and had symptoms of depression. The animals’ inability to pick up their mothers’ vaginal microbes during birth — the first bacteria that they would normally encounter — may cause lifelong changes in mental health, he says. (...)

There are implications for basic research too. In another study to be presented at the meeting, veterinarian Catherine Hagan at the University of Missouri in Columbia compared the gut bacteria in laboratory mice of the same genetic strain that had been bought from different vendors. Their commensals differed widely, she found: mice from the Jackson Laboratory in Bar Harbor, Maine, for instance, had fewer bacterial types in their guts than did mice from Harlan Laboratories, which is headquartered in Indianapolis, Indiana.

Such differences could present a major complication for researchers seeking to reproduce another lab’s behavioural experiments, Hagan says. When her team transplanted bacteria from female Harlan mice into female Jackson mice, the animals became less anxious and had lower levels of stress-related chemicals in their blood. Hagan notes that when a lab makes a mouse by in vitro fertilization, the animal will pick up microbes from its surrogate mother, which might differ greatly from those of its genetic mother. “If we’re going to kill animals for research, we want to make sure they’re modelling what we think they’re modelling,” she says.

by Sara Reardon, Nature |  Read more:
Image: via:

Tuesday, November 11, 2014

The White House Gets It Right On Net Neutrality. Will the FCC?

[ed. Of course, the major telecoms hate it.]

Over the past year, millions of Internet users have spoken out in defense of the open Internet. Today, we know the White House heard us.

In a statement issued this morning, President Barack Obama called on the Federal Communications Commission to develop new “net neutrality” rules and, equally importantly, establish the legal authority it needs to support those rules by reclassifying broadband service as a “telecommunications service.”

This is very welcome news. Back in May, the Federal Communications Commission proposed flawed “net neutrality” rules that would effectively bless the creation of Internet “slow lanes.” After months of netroots protests, we learned that the FCC had begun to settle on a “hybrid” proposal that, we fear, is legally unsustainable.

Here's why: if the FCC is going to craft and enforce clear and limited neutrality rules, it must first do one important thing. The FCC must reverse its 2002 decision to treat broadband as an “information service” rather than a “telecommunications service.” This is what’s known as Title II reclassification. According to the highest court to review the question, the rules that actually do what many of us want — such as forbidding discrimination against certain applications — require the FCC to treat access providers like “common carriers,” treatment that can only be applied to telecommunications services. Having chosen to define broadband as an “information service,” the FCC can impose regulations that “promote competition” (good) but it cannot stop providers from giving their friends special access to Internet users (bad).

Today’s statement stresses a few key principles:
  1. No blocking. If a consumer requests access to a website or service, and the content is legal, your ISP should not be permitted to block it. That way, every player — not just those commercially affiliated with an ISP — gets a fair shot at your business.
  2. No throttling.
  3. Increased transparency, including with respect to interconnection.
  4. No paid prioritization. “No service should be stuck in a 'slow lane' because it does not pay a fee.”
Wisely, the statement explicitly notes the need for forbearance. As we have said for months, reclassification must be combined with a commitment to forbear from imposing aspects of Title II that were originally drafted for 20th century telephone services and that don't make sense for the Internet. While forbearance doesn’t set the limits on the regulatory agency in stone, it does require the FCC to make a public commitment that is difficult to reverse.

This is an important moment in the fight for the open Internet. President Obama has chosen to stand with us: the users, the innovators, the creators who depend on an open Internet.

by Corynne McSherry, Electronic Frontier Foundation |  Read more:
Image: Mandel Ngan/AFP/Getty

Monday, November 10, 2014


Michel Lefebvre
via:

Into Nothingness


Dusk, that most beautiful moment
With no pattern.
Millions of images appear and disappear.
Beloved people.
How unbearable to die in the sky.

Hours after writing these lines, the 24-year-old Tadao Hayashi fuelled a battered Mitsubishi A6M Zero and flew it towards an American aircraft carrier – and into nothingness. It was late July 1945. A few days later, the United States would drop atomic bombs on Hiroshima and Nagasaki. A war sold to the Japanese public as a struggle for national survival would be over.

In contemporary Western memory, still stocked for the most part by wartime propaganda imagery of mad, rodent-like Japanese, those final weeks are a swirl of brainwashed fanaticism, reaching its apotheosis as hundreds of kamikaze planes slammed into the US ships closing in around Japan’s home islands. Three thousand raids and innumerable scouting missions were launched during the climax of the conflict, designed to show the US the terrible cost it would pay for an all-out invasion of Japan.

Yet the vast majority of planes never made it to their attack or reconnaissance targets; they were lost instead at sea. And war’s end failed to yield the apocalyptic romance for which Japan’s leaders so fervently hoped. By late 1944 and early ’45, the only ‘life or death struggle’ was the routine misery to which the empire itself had reduced its soldiers and civilians. Conscripts were trained and goaded to fire their rifles into their own heads, to gather around an activated grenade, to charge into Allied machine-gun fire. Civilians jumped off cliffs, as Saipan and later Okinawa were taken by the Allies. Citizens of great cities such as Tokyo and Osaka had their buildings torn down and turned into ammunition.

Nor do clichés of unthinking ultranationalism fit the experiences of many kamikaze pilots. For each one willing to crash-dive the bridge of a US ship mouthing militarist one-liners, others lived and died less gloriously: cursing their leaders, rioting in their barracks or forcing their planes into the sea. A few took their senninbari – thousand-stitch sashes, each stitch sewn by a different well-wisher – and burned them in disgust. At least one pilot turned back on his final flight and strafed his commanding officers. (...)

One of the most ambitious schemes for a Japanese philosophy – where nothing by that name had existed before – was emerging at Hayashi’s own institution in 1943, just when he was forcibly removed from it. The great project of Kitarō Nishida, a seasoned Zen practitioner and the founder of what became the ‘Kyoto School’ of philosophy at Kyoto Imperial University, was to do what many Zen Buddhists insisted was impossible: to describe the picture of reality revealed in meditation.

Nishida sought to reverse the key premise of Western philosophy, writing not about ‘being’ or ‘what is’, but instead about ‘nothingness’. His was not the relative nothingness of non-being – the world of the gone-away, the not-yet or the might-be. He meant absolute nothingness: an unfathomable ‘place’ or horizon upon which both being and non-being arise.

To help students make sense of this idea, Nishida liked to draw a cluster of small circles on the lecture-hall board. This is how people usually see the world, he would say: a collection of objects, and judgments about those objects. Take a simple sentence: ‘The flower is yellow.’ We tend to focus on the flower, reinforcing in the process the idea that objects are somehow primary. But what if we turn it around, focusing instead on the quality of yellowness? What if we say to ourselves ‘the flower is yellow’, and allow ourselves to become perceptually engrossed in that yellowness? Something interesting happens: our concern with the ‘is-ness’ of the flower, and also the is-ness of ourselves, begins to recede. By making ‘yellowness’ the subject of our investigation – trying to complete the sentence ‘Yellowness is…’ – we end up thinking not in terms of substance, but in terms of place. The question isn’t so much ‘What is yellowness?’ as ‘Where is yellowness?’ Against what broader backdrop does ‘yellowness’ emerge?

For Nishida, the answer was a special sort of consciousness: not first-person reflection, where consciousness is the possession of an individual, but rather a consciousness that possesses people. It becomes less true to say that ‘an individual has experiences’ than that ‘experience has individuals’.

But if consciousness is the horizon beyond ‘yellow’, what is the further horizon? Where is consciousness? Nishida drew a dotted, all-encompassing line on the board. This, he said, is ‘absolute nothingness’, producing and interpenetrating every other plane of reality. Absolute nothingness is God. And God is absolute nothingness. (...)

The trouble was, as an idea, it had other sorts of potential too. The war was dragging on. Japan’s chances of winning – or even achieving a respectable peace – were fading. There is a fine line between understanding an idea such as ‘absolute nothingness’ and deploying it as a rationalisation, and it appears that Nishida and his colleagues crossed it – and encouraged their readers to do so, too. A relatively abstract set of ideas were allowed to take on potent political form.

by Christopher Harding, Aeon |  Read more:
Image: uncredited photo by Rex Features

Supreme Court Urged to Overturn API Copyrights Decision

The Electronic Frontier Foundation (EFF) filed a brief with the Supreme Court of the United States today, arguing on behalf of 77 computer scientists that the justices should review a disastrous appellate court decision finding that application programming interfaces (APIs) are copyrightable. That decision, handed down by the U.S. Court of Appeals for the Federal Circuit in May, up-ended decades of settled legal precedent and industry practice.

Signatories to the brief include five Turing Award winners, four National Medal of Technology winners, and numerous fellows of the Association for Computing Machinery, IEEE, and the American Academy of Arts and Sciences. The list also includes designers of computer systems and programming languages such as AppleScript, AWK, C++, Haskell, IBM S/360, Java, JavaScript, Lotus 1-2-3, MS-DOS, Python, Scala, Smalltalk, TCP/IP, Unix, and Wiki.

"The Federal Circuit's decision was wrong and dangerous for technological innovation," EFF Intellectual Property Director Corynne McSherry said. "Excluding APIs from copyright protection has been essential to the development of modern computers and the Internet. The ruling is bad law, and bad policy."

Generally speaking, APIs are specifications that allow programs to communicate with each other. So when you type a letter in a word processor, and hit the print command, you are using an API that lets the word processor talk to the printer driver, even though they were written by different people.

The brief explains that the freedom to re-implement and extend existing APIs has been the key to competition and progress in both hardware and software development. It made possible the emergence and success of many robust industries we now take for granted—for example, mainframes, PCs, and workstations/servers—by ensuring that competitors could challenge established players and advance the state of the art.

The litigation began several years ago when Oracle sued Google over its use of Java APIs in the Android OS. Google wrote its own implementation of the Java APIs, but, in order to allow developers to write their own programs for Android, Google's implementation used the same names, organization, and functionality as the Java APIs.
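To make that distinction concrete, here is a minimal, hypothetical sketch in Java. The class and method names below are invented for illustration and are not drawn from the Java packages at issue; the point is only that a re-implementer keeps the declaration (the method's name, parameters, and place in a class) identical so existing programs still work, while writing the code behind it from scratch.

```java
// Hypothetical example: a tiny one-method "API" and an independent implementation of it.
public final class StringLib {

    // The declaration is the API: its name, parameters, and organization match
    // the original library, so callers written against that library compile
    // and run against this class unchanged.
    public static String join(String separator, String... parts) {
        // The body is the implementation: written independently, line by line.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) {
                sb.append(separator);
            }
            sb.append(parts[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A caller depends only on the declaration, not on who wrote the body.
        System.out.println(join(", ", "mainframes", "PCs", "servers"));
    }
}
```

In the dispute described above, it is such declarations and their organization, rather than the independently written bodies, that the Federal Circuit treated as copyrightable.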

In May 2012, Judge William Alsup of the Northern District of California ruled that the Java APIs are not subject to copyright. The court understood that ruling otherwise would have allowed Oracle to tie up "a utilitarian and functional set of symbols" that provides the basis for so much of the innovation and collaboration we all rely on today. The Federal Circuit disagreed, holding that Java's API packages were copyrightable, although it sent the case back to the trial court to determine whether Google's copying was nonetheless a lawful fair use.

by Electronic Frontier Foundation |  Read more:
Image: astanush

Sunday, November 9, 2014


Eduardo Urculo Fernandez
via:

The Van Gogh Mystery

A lone figure tramps toward a field of golden wheat. He carries a canvas, an easel, a bag of paints, and a pained grimace. He sets up his kit and begins to paint furiously, rushing to capture the scene of the swirling wheat as a storm approaches. Murderous crows attack him. He flails them away. As the wind whips the wheat into a frenzy, he races to add the ominous clouds to his canvas. Then the threatening crows. When he looks up, his eyes bug out with madness. He goes to a tree and scribbles a note: “I am desperate. I see no way out.” Gritting his teeth in torment, he reaches into his pocket. Cut to a long shot of the wheat field churning in the storm. The sudden report of a gun startles a passing cart driver. The music swells. “The End” appears against a mosaic of famous paintings and a climactic crash of cymbals.

It’s a great scene, the stuff of legend: the death of the world’s most beloved artist, the Dutch painter Vincent van Gogh. Lust for Life was conceived in 1934 by the popular pseudo-biographer Irving Stone and captured on film in 1956 by the Oscar-winning director Vincente Minnelli, with the charismatic Kirk Douglas in the principal role.

There’s only one problem. It’s all bunk. Though eagerly embraced by a public in love with a handful of memorable images and spellbound by the thought of an artist who would cut off his own ear, Stone’s suicide yarn was based on bad history, bad psychology, and, as a definitive new expert analysis makes clear, bad forensics. (...)

Van Gogh himself wrote not a word about his final days. The film got it wrong: he left no suicide note—odd for a man who churned out letters so profligately. A piece of writing allegedly found in his clothes after he died turned out to be an early draft of his final letter to his brother Theo, which he posted the day of the shooting, July 27, 1890. That letter was upbeat—even ebullient—about the future. He had placed a large order for more paints only a few days before a bullet put a hole in his abdomen. Because the missile missed his vital organs, it took 29 agonizing hours to kill him.

None of the earliest accounts of the shooting—those written in the days immediately after the event—mentioned suicide. They said only that Van Gogh had “wounded himself.” Strangely, the townspeople of Auvers, the picturesque community near Paris where he stayed in the last months of his life, maintained a studied silence about the incident. At first, no one admitted having seen Van Gogh on his last, fateful outing, despite the summer crowding in the streets. No one knew where he would have gotten a gun; no one admitted to finding the gun afterward, or any of the other items he had taken with him (canvas, easel, paints, etc.). His deathbed doctors, an obstetrician and a homeopathist, could make no sense of his wounds.

And, anyway, what kind of a person, no matter how unbalanced, tries to kill himself with a shot to the midsection? And then, rather than finish himself off with a second shot, staggers a mile back to his room in agonizing pain from a bullet in his belly?

The chief purveyor of the suicide narrative was Van Gogh’s fellow artist Émile Bernard, who wrote the earliest version of artistic self-martyrdom in a letter to a critic whose favor he was currying. Two years earlier, he had tried the same trick when Van Gogh cut off part of his ear. Bernard spun a completely invented account of the event that thrust himself into the sensational tale. “My best friend, my dear Vincent, is mad,” he gushed to the same critic. “Since I have found out, I am almost mad myself.” Bernard was not present at the time of Vincent’s fatal shooting, but he did attend the funeral.

If later accounts are to be believed—and they often are not—the police briefly investigated the shooting. (No records survive.) The local gendarme who interviewed Vincent on his deathbed had to prompt him with the open question “Did you intend to commit suicide?” To which he answered (again, according to later accounts) with a puzzled equivocation: “I think so.”

That account, like almost all the other “early accounts” of Van Gogh’s botched suicide, rested mainly on the testimony of one person: Adeline Ravoux, the daughter of the owner of the Ravoux Inn, where Van Gogh was staying in Auvers, and where he died. Adeline was 13 at the time. She did not speak for the record until 1953. When she did, she mostly channeled the stories her father, Gustave, had told her half a century earlier. Her story changed constantly, developing dramatic shape, and even dialogue, with each telling.

Around the same time, another witness stepped forward. He was the son of Paul Gachet, the homeopathic doctor who had sat for a famous portrait by Van Gogh. Paul junior was 17 at the time of the shooting. He spent most of the rest of his life inflating his own and his father’s importance to the artist—and, not incidentally, the value of the paintings father and son had stripped from Vincent’s studio in the days after his death. It was Paul junior who introduced the idea that the shooting had taken place in the wheat fields outside Auvers. Even Theo’s son, Vincent (the painter’s namesake and godson), who founded the museum, dismissed Gachet Jr. as “highly unreliable.”

by Steven Naifeh, Vanity Fair |  Read more:
Image: Van Gogh

[ed. Deep thoughts.]
via:

Why You Should Not Have Broken Up With Me, According to Various Critical Theories


Deconstruction

Ferdinand de Saussure famously said, “In language there are only differences.” What he meant by this was that words have no meaning except insofar as they contrast with other words. Thus my failure to hold down a job for more than a month cannot implicitly carry the meaning of “failure” ascribed to it by you, Tandy. A word such as “unemployed” carries a semantic value only in terms of its partner word “employed,” just as “flat broke” defines itself relatively to “financially independent” and “manic-depressive” to “emotionally stable.” The noble goal of deconstruction is to overturn these simplistic oppositions and, in Derrida’s words, reject a “hierarchizing teleology” of language. In short, the deconstructionists certainly would not approve of my being compared to our more “successful” friends, such as Steven, who grew up in a wealthy household and whose job at the New Yorker is a clear-cut case of nepotism.

Marxism

Marx believed that the arc of history bends inevitably towards a more equitable distribution of the means of production, but that the battle for socialism would be a long one. I’m confident he would agree that my current financial straits are an inevitable result of the current socioeconomic moment, rather than “a permanent shitstorm born out of sheer laziness,” as you described it in your letter. In spite of your attending that Occupy rally last year, which I missed because I was hung over from drinking too much at your work party (you’re welcome for supporting you, BTW), you seem to have forgotten the socialist credo: “From each according to his ability, to each according to his need.” If you were ever incapable of making rent on your own, I certainly would have been willing to get a job in order to help out. But you always insisted on focusing on the negative; you had no trouble criticizing me when I couldn’t pay for dinner, but you never thanked me for going to the trouble of ordering it in the first place.

Structuralism

Structuralist readings of texts tend to collapse differences, seeing the underlying patterns and paradigms and ignoring surface variations. Viewed this way, our relationship is really no different from that of Romeo and Juliet. True, we did not overcome decades of internecine violence and the harsh judgments of our families in order to be together, but all of your friends did originally tell you not to date me, because of my criminal record and facial tattoos. To focus on the things that make us different from other couples—my request that you not look me in the eyes during meals or sex, the fact that I’ve yet to introduce you to my parents even though I still live with them, my insistence that you give up your cat for adoption because of my childhood attraction to Catwoman—is a failure of intellectual rigor on your part. Our relationship is all relationships, and don’t all relationships involve some amount of compromise and/or abandonment of one’s more physically attractive cats?

Existentialism

Life is meaningless, and any attempt to find connection through human relationship is doomed to failure. In other words, your insistence that I support you and validate your existence was misguided from the start. Also, your stubborn belief that it was “wrong” of me to send those late-night texts to your best friend, Sarah, posits a dualistic notion of good/bad that is belied by human experience. There is no such thing as morality, only authenticity (i.e. acting in accordance with one’s freedom). And there was nothing inauthentic in the way that I asked Sarah if she was “down to clown around on the town, Leroy Brown” (though her refusal on the grounds that she was your best friend reeked of bourgeois conformism).

by Tommy Wallach, McSweeney's |  Read more:
Image: via:

Hip Hops
via:

Bering Strait Theory Comes Crashing Down


For most of the 20th century, new discoveries of American Indian origins that cast doubt on the Bering Strait Theory were either dismissed or ignored. But as the technology of science marched on, the cracks grew deeper and deeper.

An unintended consequence of the atmospheric testing of atomic weapons during the Cold War was that by the 1960s it had doubled the amount of radioactive carbon 14 in the environment, and this “bomb pulse” was showing up on the instruments that were used for radiocarbon dating. This led scientists to suspect that the amount of carbon 14 that is found in the environment might not have always been constant, possibly leading to wrong dates.

By the mid-1980s, dendrochronologists, those who study and date tree-rings, had managed to piece together–by matching the tree-rings of long-living species such as the bristlecone pine with those of ancient trees–an unbroken string of tree-rings over 7,000 years old. Since dendrochronology can give extremely accurate dates, often to the year, matching the two dating systems confirmed the suspicion: the amount of C14 had indeed fluctuated, and many radiocarbon dates had to be adjusted.

For Clovis First advocates, this presented a real problem, for the new calibrated radiocarbon dates pushed back the Clovis culture almost 2,000 years. It meant that the oldest reliably dated Clovis site, in Aubrey, Texas, which was radiocarbon dated at 11,590 years ago, was now approximately 13,490 years old. The Paleoindians would have had to race through the Ice-free Corridor to get to Texas in time.

But the new radiocarbon dates would give even more bad news. Geologists, also recalibrating their radiocarbon data, began to refine their estimates for when the massive ice sheets began to melt, and found themselves adjusting their dates between 500 and 2,000 years earlier. The Ice-free Corridor was now certainly impassable 13,000 years ago and possibly as late as 12,000 years ago. This meant that there was no way the Paleoindians could have walked over from Asia–or if they had, they would have had to do so 20,000 years earlier, a non-starter for the theory’s advocates. A central thesis of the Bering Strait Theory was now toppled, for if the Clovis culture was indeed the first peoples in the Americas, they had to have come by boat.

The use of boats had always been rejected by the Bering Strait advocates, because it opened up other possible routes of migration, such as Europe or Polynesia. Thus they had dismissed any contacts between Polynesians and American Indians (and many continue to dismiss evidence of prehistoric contacts), because acknowledging such contact would undercut the contention that “primitive people” could not cross the oceans, and that walking across the Bering Strait was the only possible way that Paleoindians could have come to the Americas.

But the presumption that primitive people cannot sail the ocean is a belief born out of the social evolutionary theories of Herbert Spencer and Lewis Henry Morgan–that societies inexorably evolve to greater complexity and skill. Since the Europeans were unable to cross the oceans until the 16th century, no one else should have been able to do so earlier. (...)

Many of Lang’s ideas were fanciful, but no more so than anyone else’s at the time. He believed the Polynesians landed near Copiapo in Chile in some distant past and from there colonized the Americas. The historian George Bancroft (whose dubious accomplishments include instigating the Mexican War as acting Secretary of War under President James Polk) wrote about Lang’s theory in 1841 in his influential book, History of the Colonization of the United States: “It would not be safe to reject the possibility of an early communication between South America and the Polynesia world.” The distinguished French naturalist Jean Louis Armand de Quatrefages also considered American voyages likely in his 1866 work, The Polynesians and Their Migrations.

There was little doubt in those days that the Polynesians could have made a trans-Pacific voyage. The early settlement of Hawaii, more than 2,500 miles from the northernmost islands of French Polynesia and over 3,000 miles from Tahiti, required a tremendous feat of sailing and navigation. European explorers often recorded meeting Polynesian sailors in the open ocean, including an encounter in 1615 by the Dutch navigator, Willem Cornelisz Schouten, who came across a party of Polynesians in a double-hulled ship more than 3,000 miles from their home in the Marianas.

Lang noted physical and cultural similarities between the two peoples, many of which today would be seen as the result of simple prejudice, but others, such as similar types of fishhooks, canoes, and harpoons used by Indians in California, Chile, and among the Polynesians, were not to be dismissed lightly.

The most important evidence was biological. As early as 1770, Spanish explorers wrote that maize, manioc, and white potatoes, all indigenous to the Americas, had been grown on Easter Island. Similar varieties of coconuts, bottle gourd (calabash), bananas, and chickens, were all seen as evidence of voyages back and forth. Most significantly, the sweet potato, clearly indigenous to the Americas, was found across Polynesia, including Hawaii and New Zealand. In 1866, in the journal Botany, the German botanist Berthold Carl Seemann wrote that the Polynesian name for sweet potato, “Kumara or umara, of the South-Sea Islanders, is identical with cumar, the Quichua name for sweet potato in the highlands of Ecuador.”

by Alex Ewen, Indian Country Today |  Read more:
Image: world-mysteries.com

Saturday, November 8, 2014