Thursday, January 22, 2015

Every LinkedIn Profile in a Nutshell

Introducing HoloLens

[ed. See also: This CNET Review]

I just had a 40-minute in-person demonstration of HoloLens, Microsoft's new computer headset, and I'm convinced that personal computing is on the verge of a major change.

In 10 years or so, people will be using head-mounted displays that project 3D images that you can interact with in actual space.

It's going to be a huge leap over the flat-screen computing that we've all become used to over the past 30 years. It's so obviously better that once people try it, there will be no going back.

Augmented Versus Virtual

This was the second time in two months that I felt as if I were glancing into the future. The first was when I tried on the latest version of the Oculus Rift, Facebook's virtual-reality headset. It reminded me of that "wow" feeling I had the first time I tried an iPhone back in 2007.

HoloLens and Oculus are similar but distinct. Oculus Rift is virtual reality, which means the image seems to surround you entirely, and you don't see any part of the real world.

HoloLens is augmented reality, which means it projects images on top of the real world. (It doesn't really project holograms everybody can see — to see the images, you need to be wearing the headset or looking at a computer display of what the viewer is seeing.) The goggles, or glasses, are translucent. It's a little like Google Glass but with actual glass and much more immersive. (...)

Microsoft showed us a couple of key things, such as how to move the cursor around the virtual world (that's easy — you just move your head), and how to select using a particular finger gesture — you basically stick your finger straight up in the air, as with one of those foam hands fans wave at football games, then move the finger down and back up again.

Then we were ready to go. I tried three applications and got a demo of another person using a fourth one.

Skype

This was the most obviously useful and the easiest to understand, as it was an extension of a familiar application, Skype video calling.

For the demo, I was told I would be installing a light switch. (I've never done this.) I would use the Skype app on HoloLens to call our handy friend, Lloyd, who would walk me through how to do it.

Lloyd appeared in a little window. He could see everything I was looking at. (My field of vision would appear on the Surface app he was using back at his house.) He told me to look at the set of tools, then told me to pick up the voltage meter, the screwdriver, and so on. When he needed to, he could "draw" on the world in front of me — so, for instance, he drew a little diagram to show me which way to hold the light switch when I was attaching it to a couple of wires. If I wanted to have a clear field of vision, I could "pin" the little window with him in it, so it would stop following my field of vision around. (...)

In this way, he walked me through the installation in about five minutes. I succeeded! I wish I'd had this product last weekend, when I struggled to install some curtain rods into plaster in my house. (It took a couple of tries.)

This will apparently be a real app, and it will be available when HoloLens ships.

by Matt Rosoff, Business Insider |  Read more:

Wednesday, January 21, 2015

Hapa

Everything You Need to Know About DeflateGate

Last year was a big year for scandal in the NFL. There was actual football being played for the last five months, but between the awkward press conferences, scathing reports, various legal battles and doubling down on the same personal conduct policies that got Roger Goodell in trouble in the first place, the games have taken a back seat to the league's stumbling, mumbling and fumbling. It's only fitting that we now have the Patriots and DeflateGate (I prefer BallGhazi, personally) perched atop the news cycle less than two weeks ahead of Super Bowl 49.

Letting the air out of game balls isn't as serious as the incidents that rocked the NFL world last fall. But it is another round of bad news coming at a time the NFL usually reserves for hyping its biggest event of the year.

The Patriots are now under investigation over allegations that they intentionally deflated game balls. On Tuesday night, ESPN reported that 11 of the 12 balls set aside for the Patriots offense were found to be under-inflated. Whether that was intentional, and how it could have happened, is what the NFL is now looking into, with the expectation of getting to the bottom of it by the end of this week.

With that, here's a rundown of what we know so far and a closer look at the biggest questions about the incident.

Why deflate the game balls?

Sunday's game at Foxborough was rainy and windy. Wet footballs are harder to grip, thus more difficult to throw and catch. Letting some of the air out would make them more pliable and easier for a player to handle.

Pounds of pressure or weight of the ball?

By regulation, all NFL game balls are supposed to be inflated to a range between 12.5 and 13.5 pounds per square inch (PSI). When you hear reporters talking about 11 of 12 balls found to be under-inflated by as much as two pounds, that means the air pressure in the balls was as low as 10.5 PSI. A regulation NFL football itself only weighs between 14 and 15 ounces, less than a pound.

Could the cold temperatures have caused the balls to deflate?

Lower temperatures reduce air pressure because the molecules that make up the gas are less active. However, the weather at the AFC Championship would have deflated the balls set aside for the Colts as well as the Patriots. Only the Patriots' balls were found to be deflated.
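The physics at issue is just the ideal gas law at roughly constant volume (Gay-Lussac's law): pressure scales with absolute temperature. Here is a minimal back-of-the-envelope sketch, assuming a ball inflated indoors at about 70°F and a game-time temperature of about 50°F (figures the article doesn't give):

```python
# Could temperature alone explain the drop? Gay-Lussac's law at constant
# volume: P1/T1 = P2/T2, using ABSOLUTE pressure and ABSOLUTE temperature.
# The indoor/outdoor temperatures below are assumptions for illustration.

ATM = 14.7  # atmospheric pressure in PSI (converts gauge to absolute)

def f_to_kelvin(temp_f: float) -> float:
    """Convert degrees Fahrenheit to Kelvin."""
    return (temp_f - 32) * 5 / 9 + 273.15

def pressure_after_cooling(gauge_psi: float, indoor_f: float, outdoor_f: float) -> float:
    """Gauge pressure after a ball inflated indoors equilibrates outdoors."""
    absolute = gauge_psi + ATM
    cooled = absolute * f_to_kelvin(outdoor_f) / f_to_kelvin(indoor_f)
    return cooled - ATM

# A ball inflated to the legal minimum indoors, taken out into a 50-degree rain:
print(round(pressure_after_cooling(12.5, 70.0, 50.0), 2))  # -> 11.47
```

Under those assumptions, cooling accounts for roughly a 1 PSI drop, short of the reported two pounds; and in any case, temperature alone would have affected both teams' balls.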

Wouldn't the deflated balls help the Colts too?

No, because each team has its own balls for use when its offense is on the field.

Per NFL rules, each team has 12 balls it uses on offense. The home team is also required to provide 12 more balls for backup, and visitors can bring 12 backup balls of their own if they so choose. In addition to those balls, Wilson, the company that manufactures NFL footballs, ships eight new balls directly to the officials for a game. Those are the kicking balls used by both teams, and they're kept under the control of the referees.

Why the NFL doesn't provide game balls and control them more tightly than it does now is a question for another time.

by Ryan Van Bibber, SBNation |  Read more:
Image: Elsa/Getty Images

Eighty People are as Rich as Half the World

Eighty people hold the same amount of wealth as the world’s 3.6 billion poorest people, according to an analysis just released by Oxfam. The report from the global anti-poverty organization finds that since 2009, the wealth of those 80 richest has doubled in nominal terms — while the wealth of the poorest 50 percent of the world’s population has fallen.

To see how much wealth the richest 1 percent and the poorest 50 percent hold, Oxfam used research from Credit Suisse, a Swiss financial services company, and Forbes’s annual billionaires list. Oxfam then looked at how many of the world’s richest people would need to pool their resources to have as much wealth as the poorest 50 percent — and as of March 2014, it was just 80 people.

Four years earlier, 388 billionaires together held as much wealth as the poorest 50 percent of the world.
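The comparison Oxfam describes boils down to a cumulative sum over a rich list sorted in descending order: keep adding fortunes until the running total matches the bottom half's combined wealth. A minimal sketch with placeholder figures (the actual Forbes and Credit Suisse data isn't reproduced here):

```python
# Walk down a rich list (sorted richest-first) and count how many people it
# takes for their combined wealth to reach the poorest 50 percent's total.
# All figures below are placeholders, not Forbes/Credit Suisse data.

def people_needed(rich_list_desc: list[float], bottom_half_total: float) -> int:
    running = 0.0
    for count, wealth in enumerate(rich_list_desc, start=1):
        running += wealth
        if running >= bottom_half_total:
            return count
    raise ValueError("the whole list holds less than the bottom half")

# Toy data, in billions of dollars:
rich = [76.0, 72.0, 70.1, 64.0, 47.5]  # imaginary top of a billionaires list
print(people_needed(rich, 180.0))      # -> 3
```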

Thirty-five of the 80 richest people in the world are U.S. citizens, with combined wealth of $941 billion in 2014. Together in second place are Germany and Russia, with seven mega-rich individuals apiece. The entire list is dominated by one gender, though — 70 of the 80 richest people are men. And 68 of the people on the list are 50 or older.

by Mona Chalabi, FiveThirtyEight |  Read more:
Image: Farah Abdi Wasaweh/AP

School Reform Fails the Test

[ed. See also: Our Addiction to Testing]

During the first wave of what would become the 30-year school reform movement that shapes education policy to this day, I visited good public school classrooms across the United States, wanting to compare the rhetoric of reform, which tended to be abstract and focused on crisis, with the daily efforts of teachers and students who were making public education work.

I identified teachers, principals, and superintendents who knew about local schools; college professors who taught teachers; parents and community activists who were involved in education. What’s going on in your area that seems promising? I asked. What are teachers talking about? Who do parents hold in esteem? In all, I interviewed and often observed in action more than 60 teachers and 25 administrators in 30-some schools. I also met many students and parents from the communities I visited. What soon became evident—and is still true today—was an intellectual and social richness that was rarely discussed in the public sphere or in the media. I tried to capture this travelogue of educational achievement in a book published in 1995 called Possible Lives: The Promise of Education in America. Twenty years later, I want to consider school reform in light of the lessons learned during that journey, and relearned in later conversations with some of these same teachers. (...)

To update Possible Lives, I spoke to each of these teachers again about 10 years after my visit and found that all of them shared a deep concern about the potential effect of the federal No Child Left Behind Act of 2001 on the classrooms they had worked so hard to create. No Child Left Behind and the Obama administration’s 2009 Race to the Top initiative are built on the assumption that our public schools are in crisis, and that the best way to improve them is by using standardized tests (up to now only in reading and math) to rate student achievement and teacher effectiveness. Learning is defined as a rise in a standardized test score and teaching as the set of activities that lead to that score, with the curriculum tightly linked to the tests. This system demonstrates a technocratic neatness, but it doesn’t measure what goes on in the classrooms I visited. A teacher can prep students for a standardized test, get a bump in scores, and yet not be providing a very good education.

Organizing schools and creating curricula based on an assumption of wholesale failure make going to school a regimented and punitive experience. If we determine success primarily by a test score, we miss those considerable intellectual achievements that aren’t easily quantifiable. If we think about education largely in relation to economic competitiveness, then we ignore the social, moral, and aesthetic dimensions of teaching and learning. You will be hard pressed to find in federal education policy discussions of achievement that include curiosity, reflection, creativity, aesthetics, pleasure, or a willingness to take a chance, to blunder. Our understanding of teaching and learning, and of the intellectual and social development of children, becomes terribly narrow in the process. (...)

When the standardized test score is the measure of a teacher’s effectiveness, other indicators of competence are discounted. One factor is seniority—which reformers believe, not without reason, overly constrains an administrator’s hiring decisions. Another is post-baccalaureate degrees and certifications in education, a field many reformers hold in contempt. Several studies do report low correlation between experience (defined as years in the profession) and students’ test scores. Other studies find a similarly low correlation between students’ scores and teachers’ post-baccalaureate degrees and certifications. These studies lead to an absolute claim that neither experience nor schooling beyond the bachelor’s degree makes any difference.

What a remarkable assertion. Can you think of any other kind of work—from hair styling to neurosurgery—where we don’t value experience and training? If reformers had a better understanding of teaching, they might wonder whether something was amiss with the studies, which tend to deal in simple averages and define experience or training in crude ways. Experience, for example, is typically defined as years on the job, yet years in service, considered alone, don’t mean that much. A dictionary definition of experience—“activity that includes training, observation of practice, and personal participation and knowledge gained from this”—indicates the connection to competence. The teachers in Possible Lives had attended workshops and conferences, participated in professional networks, or taken classes. They experimented with their curricula and searched out ideas and materials to incorporate into their work. What people do with their time on the job becomes the foundation of expertise.

More generally, the qualities of good work—study and experimentation, the accumulation of knowledge, and refinement of skill—are thinly represented in descriptions of teacher quality, overshadowed by the simplified language of testing. In a similar vein, the long history of Western thought on education—from Plato to Septima Clark—is rarely if ever mentioned in the reform literature. History, like experience and inquiry, is replaced with a metric.

These attitudes toward experience are rooted in the technocratic-managerial ideology that drives many kinds of policy, from health care to urban planning to agriculture: the devaluing of local, craft, and experiential knowledge and the elevating of systems thinking, of finding the large economic, social, or organizational levers to pull in order to initiate change.

by Mike Rose, American Scholar |  Read more:
Image: David Herbick/Getty/istockphoto

Tuesday, January 20, 2015

Joe Jackson


Brenda Cablayan, Urban Sprawl
via:

Photo: markk

Shibata Zeshin, Mouse
via:

Rene Magritte | Metaphor
via:

Freeze Your Butt Off

The first time that I heard about cryotherapy, it was in conversation with a friend. "It’s that thing that all the models are doing where you freeze yourself," was her exact description. No other details provided. I immediately started to picture creepy chambers full of floating bodies. Don't they do that to dead people they're planning on bringing back once the science is solid enough? I had to know more.

As it turns out, cryotherapy is a whole lot less science fiction than my imagination made it out to be. Doctors have used it for years in physical therapy, and major athletes from Usain Bolt to Cristiano Ronaldo swear it improves performance and reduces injury recovery time.

On a basic level, cryotherapy is a process in which you subject the body to extreme cold for a short period of time in order to reduce inflammation. This makes it an excellent treatment for muscle soreness and joint swelling. Rather fortuitously, according to practitioners, the treatment can also boost metabolism, stimulate collagen production, increase endorphins, reduce cellulite, and improve energy. Whether or not I had any swollen joints at that moment, the rest of the side effects (or side perks, really) were all things I wanted. So, I found KryoLife, an NYC-based company offering whole-body cryotherapy treatments, and booked the next available appointment.

Walking into the office a few days later, I was greeted by KryoLife founders Joanna Fryben and Eduardo Bohorquez-Barona. They discreetly asked me if I would mind waiting a few minutes because Yoko Ono (!!!) was just finishing a treatment. Off to a great start.

When my turn came, I shed my clothes and donned socks, a pair of wooden-soled clogs, and some ultra-thick mittens. “Make sure to dry off any sweat,” Joanna called into my dressing room. “You want to avoid frostbite!” Naturally, that caused me to panic, and I immediately broke out in a nervous sweat.

by Victoria Lewis, Into The Gloss |  Read more:
Image: Victoria Lewis

The Data Sublime

How did we come to believe the phone knows best? When cultural and economic historians look back on the early 21st century, they will be faced with the riddle of how, in little more than a decade, vast populations came to accept so much quantification and surveillance with so little overt coercion or economic reward. The consequences of this, from the Edward Snowden revelations to the transformation of urban governance, are plain, yet the cultural and psychic preconditions remain something of a mystery. What is going on when people hand over their thoughts, selves, sentiments, and bodies to a data grid that is incomprehensible to them?

The liberal philosophical tradition explains this sort of surrender in terms of conscious and deliberate trade-offs. Our autonomy is a piece of personal property that we can exchange for various guarantees. We accept various personal “costs” for certain political or economic “benefits.” For Thomas Hobbes, relinquishing the personal use of force and granting the state a monopoly on violence is a prerequisite to any legal rights at all: “Freedom” is traded for “security.” In more utilitarian traditions, autonomy is traded for some form of economic benefit, be it pleasure, money, or satisfaction. What both accounts share is the presumption that no set of power relations could persist if individuals could not reasonably consent to it.

Does that fit with the quantified, mass-surveilled society? It works fine as a post-hoc justification: “Yes,” the liberal will argue, “people sacrifice some autonomy, some privacy — but they only do so because they value convenience, efficiency, pleasure, or security even more highly.” This suggests, as per rational-choice theory, that social media and smart technologies, like the Google Now “dashboard” that constantly feeds the user information on fastest travel routes and relevant weather information in real time, are simply driving cost savings into everyday life, cutting out time-consuming processes and delivering outcomes more efficiently, much as e-government contractors once promised to do for the state. Dating apps, such as Tinder, pride themselves on allowing people to connect to those who are nearest and most desirable and to block out everyone else.

Leaving aside the unattractiveness of this as a vision of friendship, romance, or society, there are several other problems with it. First, it’s not clear that a utilitarian explanation works even on its own limited terms to justify our surrender to technology. It does not help people do what they want: Today, people hunt desperately for ways of escaping the grid of interactivity, precisely so as to get stuff done. Apps such as Freedom (which blocks all internet connectivity from a laptop) and Anti-Social (which blocks social media specifically) are sold as productivity-enhancing. The rise of “mindfulness,” “digital detox,” and sleep gurus in the contemporary business world testifies to this. Preserving human capital in full working order is something that now involves carefully managed forms of rest and meditation, away from the flickering of data.

Second, the assumption that if individuals do something uncoerced, then it was because it was worth doing rests on a tightly circular argument that assumes that the autonomous, calculating self precedes and transcends whatever social situation it finds itself in. Such a strong theory of the self is scarcely tenable in the context for which it was invented, namely, the market. The mere existence of advertising demonstrates that few businesses are prepared to rely on mathematical forces of supply and demand to determine how many of their goods are consumed. Outside the market realm, its descriptive power falls to pieces entirely, especially given “smart” environments designed to pre-empt decision-making.

The theory of the rational-calculating self has been under quiet but persistent attack within the field of economics since the 1970s, resulting in the development of behavioral economics and neuroeconomics. Rather than postulate that humans never make mistakes about what is in their best interest, these new fields use laboratory experiments, field experiments, and brain scanners to investigate exactly how good humans are at pursuing their self-interest (as economists define it, anyway). They have become a small industry for producing explanations of why we really behave as we do and what our brains are really doing behind our backs.

From a cultural perspective, behavioral economics and neuroeconomics are less interesting for their truth value (which, after all, would have surprised few behavioral psychologists of the past century) than for their public and political reception. The fields have been met with predictable gripes from libertarians, who argue that the critique of individual rationality is an implicit sanction for the nanny state to act on our behalf. Nonetheless, celebrity behaviorists such as Robert Cialdini and Richard Thaler have found an enthusiastic audience, not only among marketers, managers, and policymakers who are professionally tasked with altering behavior, but also among the nonfiction-reading public, tapping into a far more pervasive fascination with biological selfhood and a hunger for social explanations that relieve individuals of responsibility for their actions.

The establishment of a Behavioural Insights Team within the British government in 2010 (the team has since been privatized) is a case in point of this surprising new appetite for nonliberal or postliberal theories of individual decision making. Set against the prosaic nature of the team’s actual achievements, which have mainly involved slightly faster processing of tax and paperwork, the level of intrigue that surrounds it, and the political uses of behaviorism in general, seems disproportionate. The unit attracted some state-phobic critiques, but these have been far outnumbered by a half-mystical, half-technocratic media fascination with the idea of policymakers manipulating individual decisions. This poses the question of whether behavior change from above is attractive not in spite of its alleged paternalism but because of it.

Likewise, the notorious Facebook experiment on “emotional contagion” was understandably controversial. But would it be implausible to suggest that people were also enchanted by it? Was there not also a mystical seduction at work, precisely because it suggested some higher power, invisible to the naked eye? We assume, rationally, that the presence of such a power is dangerous. But it is no contradiction to suggest that it might also be comforting or mesmerizing. To feel part of some grand technocratic plan, even one that is never made public, has the allure of immersing the self in a collective, in a manner that had seemed to have been left on the political scrapheap of the 20th century.

by William Davies, TNI |  Read more:
Image: uncredited

Mega-Project: Nicaragua’s Massive New Canal


Just north of Punta Gorda, the view of Nicaragua’s Miskito coast is much as Christopher Columbus would have seen it when he first sailed these waters more than five centuries ago. On the land, there is little sign of habitation among the forested cliff tops and pellucid bays. At sea, the only traffic is a small boat and a pod of half a dozen dolphins.

Our launch, however, is a 21st-century beast that leaps and crashes through the swells with bone-jarring, teeth-rattling thuds as we speed past this nature reserve and indigenous territory that is set to become the stage for a great many more noisy, polluting intrusions by the modern world.

If the dreams of Nicaraguan officials and Chinese businessmen are realised, this remote idyll will be transformed over the next five years into a hub of global trade – the easternmost point of a new canal linking the Atlantic and Pacific for supertankers and bulk carriers that are too big for the Panama canal.

In an era of breathtaking, earth-changing engineering projects, this has been billed as the biggest of them all. Three times as long and almost twice as deep as its rival in Panama, Nicaragua’s channel will require the removal of more than 4.5bn cubic metres of earth – enough to bury the entire island of Manhattan up to the 21st floor of the Empire State Building. It will also swamp the economy, society and environment of one of Latin America’s poorest and most sparsely populated countries. Senior officials compare the scale of change to that brought by the arrival of the first colonisers.
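The Manhattan comparison is easy to sanity-check. A back-of-the-envelope sketch, assuming a land area for Manhattan of roughly 59 square kilometres and an average storey height of about 3.7 metres (neither figure appears in the article):

```python
# Rough check on the excavation claim: spread 4.5bn cubic metres of earth
# evenly over Manhattan and see how high it piles up.
# Both the area and the storey height below are outside assumptions.

excavated_m3 = 4.5e9      # earth to be removed, in cubic metres
manhattan_m2 = 59e6       # approximate land area of Manhattan, square metres
storey_m = 3.7            # rough average height of one floor

depth_m = excavated_m3 / manhattan_m2
print(f"{depth_m:.0f} m deep, about floor {depth_m / storey_m:.0f}")
# -> 76 m deep, about floor 21
```

That lands right around the 21st floor, so the article's image holds up under these assumptions.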


“It’s like when the Spanish came here, they brought a new culture. The same is coming with the canal,” said Manuel Coronel Kautz, the garrulous head of the canal authority. “It is very difficult to see what will happen later – just as it was difficult for the indigenous people to imagine what would happen when they saw the first [European] boats.”

For the native Americans, of course, that first glimpse of Spanish caravels was the beginning of an apocalypse. Columbus’s ships were soon followed by waves of conquistadores whose feuding, disease and hunger for gold and slaves led to the annihilation of many indigenous populations.

The Nicaraguan government, by contrast, hopes the canal can finally achieve the Sandinista dream of eradicating poverty. In return for a concession to the Chinese company HKND, it hopes for billions of dollars of investment, tens of thousands of jobs and, eventually, a stable source of national income.

First, however, the project has to be built. Since the days of the first Spanish colonisers, there have been more than 70 proposals to construct a route across this stretch of the Central American isthmus. Blueprints have been sketched out by British, US and French engineers. Almost all have remained on the drawing board.

But this time work is already under way. The groundbreaking ceremony took place on 22 December. Over the next five years, engineers will build a 30-metre-deep, 178-mile fenced waterway which, if finished (and there must always be doubts for a project of this size and cost), will change the lives of millions and the wildlife of a continent.

by Jonathan Watts, The Guardian | Read more:
Images: Fitzcorraldo and The Guardian

Monday, January 19, 2015


Photo: markk 

Hall and Oates

Seahawks 28, Packers 22


Watching the Packers at the end of that game was like every nightmare in which you arrive to class five minutes late, find out there’s a pop quiz, realize you haven’t put on pants, and discover the floor beneath you actually isn’t there, and you’re falling. Except in this case, they happen all at once. And they’re real, not dreams. And Marshawn Lynch is the monster at the end of your real life.

by Grantland Staff |  Read more: here and here
Image: Tom Pennington/Getty

Saturday, January 17, 2015


Tardigrades (also known as waterbears or moss piglets) are water-dwelling, segmented micro-animals with eight legs.
via:

Japan’s Island Problem

[ed. See also: The Shape of Japan to Come.]

“Don’t get me wrong,” said Mr. Hasegawa, a fisherman. “I don’t think that the bombing of Hiroshima was a good thing.” Staring at the furious grey channel where the Pacific Ocean meets the Sea of Okhotsk off Hokkaido in northern Japan on a cold, clear day last March, he spoke like a trauma victim reliving the past: “But if the Americans had dropped the atomic bomb a month earlier, those islands out there would still be Japan’s.”

Were I unaware of the chronology of the summer of 1945 and had we been anywhere else, such a comment would make the engaging 64-year-old seem insensitive or odd. Yet on the horizon three miles in the distance were the snow-covered banks of one of Russia’s Kuril Islands, known to the Japanese who lived there until 1945 as Suishojima of the Habomai group. Mr. Hasegawa’s father was among the 17,291 Japanese who called it and several other nearby islands home. Admiral Yamamoto gathered his fleet there in 1941 to attack Pearl Harbor, and the region was once one of the three richest fishing grounds in the world, replete with salmon, herring, and cod.

In August 1945 wartime Emperor Hirohito announced Japan’s cataclysmic losses following America’s nuclear decimation of Hiroshima and Nagasaki, its firebombing of most other cities, and its devastation of Okinawa island in the East China Sea. Equally important, Russia had disavowed its neutrality pact with Japan, and Soviet troops were advancing into Japanese-controlled Manchuria, northern Korea, and a number of islands around Hokkaido. As the emperor told his defeated subjects with staggering understatement, “The war situation has developed not necessarily to Japan’s advantage.”

What parts of its massive empire Japan would forfeit were then unknown. In the coming years, an area that once resembled an enormous octopus spanning North China and the southern Pacific near Australia would be reduced to the seahorse-shaped nation that we are now familiar with. But this reality has yet to be accepted fully in Japan, especially among people like Mr. Hasegawa, whose lives were upended by history. They were left to imagine any number of alternate realities.

On September 2, 1945, Hirohito’s representatives signed surrender papers to American officers aboard the USS Missouri. At the same moment, Soviet soldiers overwhelmed the islands that the Japanese continue to call the Northern Territories (yet which are known internationally as the southern part of Russia’s Kuril Island chain). At the Yalta Conference in February 1945, Franklin Roosevelt promised these islands to Joseph Stalin in exchange for his troops’ entry into the war on the side of the Allies. Within three days of the soldiers’ arrival on the southern Kurils, the Russians began to deport most of the Japanese to Hokkaido, although some were also taken to POW camps in Siberia. Some 20,000 Russians live on these islands today, and that, to paraphrase Vladimir Putin’s current mood, would appear to be that. Except, of course, for the evicted islanders and their descendants.

The peace treaty that ended war between Japan and the Allied Powers was signed in San Francisco in September 1951 and came into effect the following April. It dismantled Japan’s vast empire, returning the country largely to the shape it was in 1869, the year that Hokkaido became part of it. Whatever detractors say today, at the time Emperor Hirohito was pleased. On April 26, 1952, General Matthew Ridgway sent a telegram from Tokyo to the treaty’s chief architect in Washington, John Foster Dulles: “His Majesty the Emperor of Japan, on his own initiative, graciously called upon me this morning and personally expressed his gratitude … [for] making it possible for Japan to regain her sovereignty next Monday.” (...)

The internationally accepted map of Japan today dates from this moment in the early 1950s. The American negotiators involved in its creation excluded specific mention of the islands at the heart of each of Japan’s territorial disputes with Russia, China and Taiwan, and Korea. President Harry Truman’s special representative to the treaty process, John Foster Dulles, kept abundant correspondence, and his records along with those of other diplomats make clear that the final map would not fully commit to naming who owned what—for reasons ranging from real and perceived threats of Communist takeover of the entire area, including Japan, to a desire to cement the need for American power in the region. The Senate Foreign Relations Committee was displeased with this gamble, especially in terms of the islands Japan contests with Russia. On January 17, 1952, Senator Tom Connally wrote to Dulles that the formula was “vague and contained the germ of future conflicting claims.”

Over sixty years later, that germ has developed and spread: in addition to the conflict with Russia, there is the perilous standoff in the East China Sea over several steep crags known to the Japanese as the Senkaku and to the Chinese and Taiwanese as the Diaoyutai, and the caustic on-again, off-again slugfest with Korea over some rocks in the sea that the Japanese call Takeshima and the Koreans, Dokdo. (...)

In 2012 Japanese novelist Haruki Murakami criticized all sides in the dispute for getting people “drunk” on nationalism’s “cheap liquor.” At the time, the Japanese government had just upped the ante by purchasing the islands for $26 million from the family that had held them privately for decades. The already tense situation erupted into widespread anti-Japanese protests and boycotts throughout China, resulting in hundreds of millions of dollars in trade loss, and leading to Japanese, Chinese, and American warships patrolling the area. Following months of frigid relations, China declared an “Air Defense Identification Zone” in the skies above the islands in November 2013, matching what Japan had maintained for decades but generating new outrage because Beijing dictated its position unilaterally and issued unusually expansive demands. (...)

In 1945 the United States captured these islands in the Battle for Okinawa—known locally as the “Typhoon of Steel”—and then governed them together with the rest of Okinawa, its pilots using them for target practice. When Washington agreed to Okinawa’s sovereign reversion to Japan in 1972, it postponed decisions over who would have control over these rocks, recognizing Japan’s so-called administrative rights but not sovereignty. This remains the U.S. position today, regardless of Tokyo’s hard lobbying and Beijing’s bellicosity.

Oil and natural gas deposits near these islands were discovered in 1968, leading some to say that the fight is simply a resource struggle. Yet as recently as 2008, Japanese and Chinese companies established joint development guidelines. This draws attention to an additional dynamic at play that involves lingering historical animosities, distinct from the new laws of the sea but drawing dividing lines just as powerfully. (...)

The United States did not create these various island disputes, but as the victor in 1945, it drew expedient boundaries to contain a history of conflict, and those boundaries are showing their limits. History matters, of course. Yet the propensity to treat it like a backdrop to the present, rather than learning from it, has helped transform Northeast Asia’s legacies into contemporary tinderboxes.

by Alexis Dudden, Dissent | Read more:
Image: Al Jazeera English