Sunday, January 19, 2014

How Japan Stood Up to Old Age

John Creighton Campbell, professor emeritus at the University of Michigan, has devoted much of his career to studying responses to ageing in Japan. He takes issue with some fellow academics who associate what has become known as Japan’s “hyper-ageing” with inevitable economic catastrophe, even civilisational collapse. One virtue of the ageing “crisis”, he says, is that it happens slowly and predictably, giving governments, labour markets and society in general time to adjust. By around 2017, the number – though not the proportion – of over-65s will actually stabilise, he says, meaning the costs associated with ageing will tend to level off.

As far back as the early 1960s, the government became aware of the imminent ageing problem and began to establish nursing homes and home helpers. In the 1970s, benefits for retirees were more than doubled and a system of virtually free healthcare for older people was established. In 1990, Japan introduced the “Gold Plan”, expanding long-term care services. Ten years later, it started to worry about how to pay for it, and imposed mandatory insurance for long-term care. All those over 40 are obliged to contribute. The scheme’s finances are augmented with a 50 per cent contribution from taxes and recipients are charged a co-payment on a means-tested basis. Even then, there have been financing problems and the government has had to scale back the level of services provided. Still, Campbell calls it “one of the broadest and most generous schemes in the world”.

As a result of these and other adaptations, he argues, Japan has struck a reasonable balance between providing care and controlling costs. Other countries, including Britain, have studied Japan closely for possible lessons. Of course, 15 years of deflation have left Japan’s overall finances in lousy shape, with a public debt-to-output ratio of 240 per cent, the highest in the world. Spending on healthcare per capita, however, is among the lowest of advanced nations, though outcomes are among the best.

by David Pilling, FT |  Read more:
Image: Toshiki Senoue

Saturday, January 18, 2014


Auguste Herbin (French, 1882-1960), Nature morte au melon, 1920-26.
via:

l’alia vojo (by La Stranga)
via:

Outsourcing Haiti

Across the country from Port-au-Prince, Haiti’s capital, miles of decrepit pot-holed streets give way to a smooth roadway leading up to the gates of the Caracol Industrial Park, but no further. The fishing hamlet of Caracol, from which the park gets its name, lies around the bend down a bumpy dirt road. Four years after the earthquake that devastated the country on January 12, 2010, the Caracol Industrial Park is the flagship reconstruction project of the international community in Haiti. Signs adorn nearby roads, mostly in English, declaring the region “Open for Business.” In a dusty field, hundreds of empty, brightly colored houses are under construction in neat rows. If all goes as hoped for by the enthusiastic backers of the industrial park, this area could be home to as many as 300,000 additional residents over the next decade.

The plan for the Caracol Industrial Park project actually predates the 2010 earthquake. In 2009, Oxford University economist Paul Collier released a U.N.–sponsored report outlining a vision for Haiti’s economic future; it encouraged garment manufacturing as the way forward, noting U.S. legislation that gave Haitian textiles duty-free access to the U.S. market as well as “labour costs that are fully competitive with China . . . [due to] its poverty and relatively unregulated labour market.”

The report, embraced by the U.N. and the U.S., left a mark on many of the post-earthquake planning documents. Among the biggest champions of the plan were the Clintons, who played a crucial role in attracting a global player to Haiti. While on an official trip to South Korea as Secretary of State, Hillary Clinton brought company officials from one of the largest South Korean manufacturers to the U.S. embassy to sell them on the idea. U.N. Secretary General Ban Ki-moon, having just appointed Bill Clinton U.N. special envoy to Haiti, tapped connections in his home country, South Korea.

Then suddenly, the earthquake presented an opportunity for the Clintons and the U.N. to fast track their plans. The U.S. government and its premier aid agency, USAID, formed an ambitious plan to build thousands of new homes, create new industries, and provide new beginnings for those who lost everything in the earthquake. Originally the plan was to build the industrial park near Port-au-Prince. But land was readily available in the North, and the hundreds of small farmers who had to be moved from the park’s site were far less resistant than the wealthy land-owners in the capital. So the whole project moved to the Northern Department, to Caracol. Under the banner of decentralization and economic growth, the Caracol Industrial Park, with the Korean textile manufacturer Sae-A as its anchor tenant, became the face of Haiti’s reconstruction.

Now, only 750 homes have been built near Caracol, and the only major tenant remains Sae-A. New ports and infrastructure have been delayed and plagued by cost overruns. Concerns over labor rights and low wages have muted the celebration of the 2,500 new jobs created. For those who watched pledges from international donors roll in after the earthquake, reaching a total of $10 billion, rebuilding Haiti seemed realistic. But nearly four years later, there is very little to show for all of the aid money that has been spent. Representative Edward Royce (R-CA), the chair of the House Foreign Affairs Committee, bluntly commented in October that “while much has been promised, little has been effectively delivered.”

The story of how this came to pass involves more than the problems of reconstruction in a poor country. While bad governance, corruption, incompetent bureaucracy, power struggles, and waste contributed to the ineffective use of aid, what happened in Haiti has more to do with the damage caused by putting political priorities before the needs of those on the ground.

by Jake Johnston, Boston Review |  Read more:
Image: Jake Johnston

Cooking Tako (Octopus)



Octopus Demystified

[ed. My nephew's recipe: massage tako (octopus) with Hawaiian salt for about 5 min. to de-slime and soften, remove webbing, boil in salted water for about 1/2 hr., or long enough to slide a fork gently into the tentacles. Optional: grill the cooked tako for a minute or two to give it a crisp and slightly smoky exterior.]

Friday, January 17, 2014

Dr. V’s Magical Putter

Strange stories can find you at strange times. Like when you’re battling insomnia and looking for tips on your short game.

It was well past midnight sometime last spring and I was still awake despite my best efforts. I hadn’t asked for those few extra hours of bleary consciousness, but I did try to do something useful with them.

I play golf. Sometimes poorly, sometimes less so. Like all golfers, I spend far too much time thinking of ways to play less poorly more often. That was the silver lining to my sleeplessness — it gave me more time to scour YouTube for tips on how to play better. And it was then, during one of those restless nights, that I first encountered Dr. Essay Anne Vanderbilt, known to friends as Dr. V.

She didn’t appear in the video. As I would later discover, it’s almost impossible to find a picture, let alone a moving image, of Dr. V on the Internet. Instead, I watched a clip of two men discussing the radical new idea she had brought to golf. Gary McCord did most of the talking. A tournament announcer for CBS with the mustache of a cartoon villain, McCord is one of the few golf figures recognizable to casual sports fans because he’s one of the few people who ever says anything interesting about the sport.

The video was shot in March of last year, when McCord was in California for an event on the Champions Tour, the 50-and-over circuit on which he occasionally plays. In it, he explained that he had helped Dr. V get access to the nearby putting green, where he said she was currently counseling a few players. She was an aeronautical physicist from MIT, he continued, and the woman who had “built that Yar putter with zero MOI.” The credentials were impressive, but the name “Yar” and the acronym were unfamiliar.

According to McCord, before building her putter Dr. V had gone back and reviewed all the patents associated with golf, eventually zeroing in on one filed in 1966 by Karsten Solheim. As the creator of Ping clubs, Solheim is the closest thing the game has to a lovable grandfather figure. He was an engineer at General Electric before becoming one of the world’s most famous club designers, and his greatest gift to the sport was his idea to shift the weight in a club’s face from the middle to its two poles. This innovation may sound simple, but at the time it was revolutionary enough to make Solheim one of the richest men in America and the inventor of one of the most copied club designs in history. In Dr. V’s estimation, however, Solheim was nothing but a hack. “The whole industry followed [that patent],” she told McCord. “You’re using pseudoscience from the ’50s in golf!”

As the video went on, McCord told the story of how he had arranged a meeting between Dr. V and an executive at TaylorMade, the most successful clubmaker in the world, whose products McCord also happened to endorse. The gist of that meeting: This previously unknown woman had marched up to one of the most powerful men in golf and told him that everything his company did was wrong. “She just hammered them on their designs,” McCord said. “Hammered them.”

I was only half-awake when I watched the clip, but even with a foggy brain I could grasp its significance. McCord is one of golf’s most candid talkers — his method of spiking the truth with a dash of humor famously cost him the chance to continue covering the Masters after the schoolmarms who run the tournament objected to his description of one green as so fast that it looked like it had been “bikini-waxed.” This respected figure was saying that this mysterious physicist had a valuable new idea. But the substance of that idea wasn’t yet clear — over time, I would come to find out that nothing about Dr. V was, and that discovery would eventually end in tragedy. That night, however, all I knew was that I wanted to know more.

by Caleb Hannan, Grantland |  Read more:
Image: uncredited

Photo: markk

Thursday, January 9, 2014

Break


via:
[ed. I'll be on a short break and back soon (wi-fi willing). Enjoy the archives.]

Wednesday, January 8, 2014


Hiroshige (?) 1854.
via:

The Internet of Things Is Wildly Insecure — And Often Unpatchable

[ed. See also: How the NSA Almost Killed the Internet. It seems the future of cloud computing and the so-called Internet of Things is vulnerable not only to hacking and NSA snooping, but political fracturing that could Balkanize large portions of the internet.]

We’re at a crisis point now with regard to the security of embedded systems, where computing is embedded into the hardware itself — as with the Internet of Things. These embedded computers are riddled with vulnerabilities, and there’s no good way to patch them.

It’s not unlike what happened in the mid-1990s, when the insecurity of personal computers was reaching crisis levels. Software and operating systems were riddled with security vulnerabilities, and there was no good way to patch them. Companies were trying to keep vulnerabilities secret, and not releasing security updates quickly. And when updates were released, it was hard — if not impossible — to get users to install them. This has changed over the past twenty years, due to a combination of full disclosure — publishing vulnerabilities to force companies to issue patches quicker — and automatic updates: automating the process of installing updates on users’ computers. The results aren’t perfect, but they’re much better than ever before.

But this time the problem is much worse, because the world is different: All of these devices are connected to the Internet. The computers in our routers and modems are much more powerful than the PCs of the mid-1990s, and the Internet of Things will put computers into all sorts of consumer devices. The industries producing these devices are even less capable of fixing the problem than the PC and software industries were.

If we don’t solve this soon, we’re in for a security disaster as hackers figure out that it’s easier to hack routers than computers. At a recent Def Con, a researcher looked at thirty home routers and broke into half of them — including some of the most popular and common brands.

To understand the problem, you need to understand the embedded systems market.

by Bruce Schneier, Wired |  Read more:
Image: alengo/Getty Images

The Shape of Things to Come


On Alpine Road in Portola Valley, a few miles southwest of the campus of Stanford University, where the flat suburban landscape begins to give way to the vistas of the Santa Cruz Mountains, there is an old wooden roadhouse called the Alpine Inn, where college students drink beer and wine at old wooden tables carved with initials. It’s as if Mory’s, the venerable Yale hangout, were housed in a western frontier tavern out of a John Wayne movie. The locals, who call the place Zott’s, a contraction of Rossotti’s, the name of long-ago owners, claim it has the best hamburgers for miles around, but what makes the place notable isn’t what it serves. Affixed to the wall near the front door is a small bronze plaque that reads:

ON AUGUST 27, 1976, SCIENTISTS FROM SRI INTERNATIONAL CELEBRATED THE SUCCESSFUL COMPLETION OF TESTS BY SENDING AN ELECTRONIC MESSAGE FROM A COMPUTER SET UP AT A PICNIC TABLE BEHIND THE ALPINE INN. THE MESSAGE WAS SENT VIA A RADIO NETWORK TO SRI AND ON THROUGH A SECOND NETWORK, THE ARPANET, TO BOSTON. THIS EVENT MARKED THE BEGINNING OF THE INTERNET AGE.

That the world’s first e-mail was sent from a picnic table outside at Zott’s goes well with the rest of Silicon Valley lore, like the founding of Hewlett-Packard in one garage and Apple in another. It reminds you that for a long time the most striking thing about the appearance of Silicon Valley was how ordinary it was, how much it looked like everyplace else, or at least like every other collection of reasonably prosperous American suburbs, whatever may have been going on in its garages and whatever some geeks may have done over beers at Zott’s 37 years ago. Yes, Silicon Valley has Stanford, with its vast and beautiful campus, and some handsome mountain scenery marking its western edge, but the rest of the place has always been made up of neighborhoods and landmarks that could have been almost anywhere else, like the 101 Freeway and the strip malls and supermarkets and car dealerships and motels and low-rise office parks. Most of Silicon Valley is suburban sprawl, plain and simple, its main artery a wide boulevard called El Camino Real that might someday possess some degree of urban density but now could be on the outskirts of Phoenix. Zott’s is what passes for local color, but even this spirited roadhouse has a certain generic look to it. You could imagine it being almost anywhere out West, the same way that so much of Silicon Valley looks like generic suburbia.

And even after a few people began doing unusual things in their garages, and other people started inventing things in the university’s laboratories, and even after some of these turned into the beginnings of large corporations, some of which became successful beyond anyone’s imagination—even these things didn’t make Silicon Valley look all that different from everyplace else. The tech companies got bigger and bigger, but that has generally just meant that the sprawl sprawled farther. There was certainly nothing about the physical appearance of these few square miles that told you it was the place that had generated more wealth than anywhere else in our time.

Until now, that is. In June of 2011, four months before his death, Steve Jobs appeared before the City Council of Cupertino, where Apple’s headquarters are located. It was the last public appearance Jobs would make, and if it did not have quite the orchestrated panache of his carefully staged product unveilings in San Francisco, it was fixed even more on the future than the latest iPhone. Jobs was presenting the designs for a new headquarters building that Apple proposed to build, and that the City Council would have to approve. It was a structure unlike any other that his company, or any other in the world, had ever built: a glass building in the shape of a huge ring, 1,521 feet in diameter (nearly five football fields), its circumference curving for nearly a mile. It was designed by Sir Norman Foster, the British-born architect known for the elegance of his work and for the uncompromising nature of his sleek, modern aesthetic—close to Jobs’s own. In a community that you could almost say has prided itself on its indifference to architecture, Apple, which had already changed the nature of consumer products, seemed now to want to try to do nothing less than change Silicon Valley’s view of what buildings should be.

That the proposed building was received with great enthusiasm was no surprise; a small suburban city like Cupertino is rarely going to stand in the way of whatever its largest taxpayer wants to do, and the building, after all, was one of Steve Jobs’s dying wishes. What was more surprising was that not long after Apple unveiled Foster’s audacious design, which it expects to start constructing soon and to occupy in 2016, Facebook decided that it, too, needed more space, and after searching several months for an architect, the company hired Frank Gehry, one of the few architects in the world who is even better known than Foster, and set him to work on a massive building of its own. Gehry’s Facebook building is intended in some ways to be the antithesis of Foster’s for Apple. It will be set lower into the ground and will be covered entirely by roof gardens: a building that will blend into the landscape rather than hover over it like an alien spacecraft. (From the minute the design became public, people have been calling the Apple building the “spaceship.”) But Facebook’s project is not exactly what you would call modest: underneath those gardens will be what might be the largest office in the world, a single room so gargantuan that it will accommodate up to 10,000 workers.

A few months after Facebook unveiled Gehry’s project, in the summer of 2012, Google, the biggest company of all, which until then had been operating solely out of existing buildings that it had renovated to suit its purposes, announced that it, too, was going to build something from scratch. Google had canceled a new building designed by the German architect Christoph Ingenhoven earlier that year, but after the Facebook announcement the company turned again to the idea of putting up a new building, as if it could not be left out of this latest form of Silicon Valley competition. In the architecture arms race, Google’s long-standing practice of taking over old suburban office buildings—and sometimes even entire office parks—scooping out their insides, and replacing them with lively, entertaining innards was no longer enough. Google hired NBBJ, a prominent Seattle-based firm—take that, Microsoft!—and set it to work on a new complex to add to the dozens of low-rise buildings it already occupies in the town of Mountain View.

All of this activity suggests that Silicon Valley now wants to grow up, at least architecturally. But it remains to be seen whether this wave of ambitious new construction will give the tech industry the same kind of impact on the built environment that it has had on almost every other aspect of modern life—or even whether these new projects will take Silicon Valley itself out of the realm of the conventional suburban landscape. One might hope that buildings and neighborhoods where the future is being shaped might reflect a similar sense of innovation. Even a little personality would be nice.

by Paul Goldberger, Vanity Fair |  Read more:
Image: Apple, Inc.

Tuesday, January 7, 2014

Even the New York Times Can't Resist Going Lowbrow with Native Advertising

[ed. See also: Zuckerberg wrestles with the same issue.]

One of the anomalies of digital journalism is a lack of clarity between high and low. That's the historic distinction in publishing, mass from class, the vulgar from the refined, tabloid from broadsheet, the penny press from papers costing a nickel.

You knew who you were by what you read. You were what you read.

For writers, writing for the New Yorker was not only a different experience, and different purpose, but actually imputed different meaning than writing for, say, the Reader's Digest, or the New York Post, or, for that matter, Time. Even the upper segment was segmented, each brand cultivating its form of elitism.

Now, in a sense, there is just Buzzfeed and its like: traffic-magnet sites. Buzzfeed's editor, Ben Smith, is a credible journalist who now works in the middle of a random content stew that, in another world, would have devalued his skills and undermined his career potential. But Smith, along with a whole generation of writers who exist outside of intellectual caste or conceit, is part of a flattened world, one in which there is only one real measure, traffic. And almost all traffic is low value. Hence the main job is getting more of it, and, if possible, incrementally raising its value.

Which is why the New York Times now finds itself, grimly, and with the greatest self-pity, having to accept native advertising or branded content. In an achingly self-conscious memo, the Times publisher, Arthur Sulzberger (quite a big gun to announce a minor advertising development), tried to explain "our version of what is sometimes called 'native advertising' or 'branded content'" and why it would not offend Times readers or the Times' sensibility.

How advertising is handled has always been a key distinction between low and high order publishing. The higher you stood, the more separate you were from advertising, and, in the logic of snobbery, the greater a premium price the top brands would pay to be in your company. Whereas, lower order publishing (middle market newspapers, Sunday supplements, women's magazines, hobby magazines, trade magazines) has, traditionally, done pretty much anything that advertisers wanted.

In the new digital world of content disaggregation (where you find a single article through search engines or social media referral, rather than seeking out a particular brand) and traffic aggregation (in which ad networks and programmatic buying now deliver huge audiences largely disconnected from a brand), snobbery has less and less of a place.

The New York Times' traffic, or, for that matter the New Yorker's, does not trade at much of a premium to Buzzfeed or Gawker, both sites that are now earning incrementally greater advertising rates with native advertising programs.

Native advertising is a response by ad-supported content sites to deal with the fact that display advertising – clearly separated advertising units on a given page – yields ever-more discouraging response rates. The alternative, a common practice of lower order publishing, is to create advertising content that is easily confused with editorial content, in the hope of raising response rates.

Suffice it to say, it is easier for an advertiser to mimic the hodge-podge of Buzzfeed content than to mimic, say, the New Yorker's content. Again, the high and low divide. Indeed, many of the most successful new content sites – Buzzfeed, Huffington Post, Gawker, Business Insider, and Glam Media, among them – are such an amalgam of aggregated content, partnership sharing agreements, pay per click modules, user generated contributions, and, as well, the blitherings of novice journalists (sometimes heralded as a return to long form), that it's very hard, if not pointless, to separate real content from phony stuff. Hence, the Times' angst.

The Times' substantial investment in resources, quality controls, expertise, and exclusivity is now competing in a form better served by the opposite of those things.

by Michael Wolff, The Guardian |  Read more:
Image: Richard Drew/AP

Fore!

[ed. Ehh...  not much worth posting today, so here's a classic. Makes me think of a hilarious Curb Your Enthusiasm episode when Larry decides to swipe a beloved 5-wood from the casket of his dead buddy at a funeral home and gets caught.]

On the par-3, 175-yard fourteenth hole at Riviera, I hit my tee shot a mere ninety yards and a physics-defying thirty degrees to the right—almost sideways. It’s a miracle I got my right leg out of the way, or I could have shattered it with the club. As I walked to the ball, I remarked to my friend that after seventeen years of playing this course I’d never seen someone hit a ball anywhere near where mine ended up. He had never seen it, either. “What’s more,” I said, “I couldn’t care less.” My friend was taken aback. But I meant it. I didn’t care, and I didn’t particularly care about the next shot, either. I felt liberated, not unlike the way I felt when my wife left me, except this time I didn’t take up skipping.

Finally, after years of pain and struggle, I had accepted the fact that I would never be a good golfer. No matter how many hours I practiced, no matter how many instructors I saw, how many books and magazines I read, or how many teaching aids I tried. Then it hit me. According to Dr. Elisabeth Kübler-Ross’s book “On Death and Dying,” Acceptance was the final stage of grief that terminal patients experience before dying, the others being Anger, Denial, Bargaining, and Depression. I was in the final stage! When I started thinking about it, I realized that I’d gone through every one of those stages, but not as a terminal patient . . . as a golfer.

My first stage: Anger. There was a time when I was always angry on the course. Driving fast in the cart. Throwing clubs. Constantly berating myself. “You stink, four-eyes! You stink at everything. You can’t even open a bottle of wine! You can’t swipe a credit card at the drugstore! You can’t swipe. And you’ve never even been to the Guggenheim. The Guggenheim! And call your parents, you selfish bastard!” Then I’d walk off the course and vow never to play again, only to return the following week for more of the same. I hardly ever finished a round. Once, I bought a brand-new set of clubs, and then, after a particularly terrible day, I gave them to the caddy at the sixteenth hole and left.

The Anger phase lasted for years, and then I entered the next phase, Denial. “All I need are some lessons,” I told myself. “Why should everyone else be able to do it and not me? Why are they good? I’m coördinated. I have a jump shot! I can go to my left. Obviously I have it in me. I have it in me! Next year, I’ll go to Orlando and spend a week taking lessons with Leadbetter. I don’t care what it costs. How can you spend a week with Leadbetter and not get better? It’s impossible.” But I did, and I didn’t.

by Larry David, New Yorker |  Read more:
Image: Vector_Golf by DaPino Webdesign

What's on Your Reading List?

Monday, January 6, 2014


via: lost link
[ed. Reminds me of somebody I used to know. Oh no... It's that song!]

Sears Roebuck Catalogue Assembly Line 1942.
via: