Thursday, March 8, 2012

Nimbus



These stunning photos of indoor clouds might look like digital creations, but they show real scenes created by Dutch artist Berndnaut Smilde.

The clouds are generated using a smoke machine, but Smilde must carefully monitor a room's humidity and atmosphere to get the smoke to hang so elegantly and with such lifelike form. Backlighting brings out shadows from within the cloud, giving it the look of a looming, ominous rain cloud.

"I wanted to make the image of a typical Dutch raincloud inside a space," Smilde told Gizmag. "I'm interested in the ephemeral aspect of the work. It's there for a brief moment and then the cloud falls apart. The work only exists as a photograph."

by Bryan Nelson, MNN |  Read more:

"Someone I loved once gave me a box full of darkness. It took me years to understand that this too, was a gift."

- Mary Oliver

via:
Image: tejido

Two burger-obsessed Japanese writers tell us their likes, dislikes - and why General MacArthur would be the ideal dining companion.

Metropolis: What makes a good hamburger?

Yoshihide Matsubara: First, a good burger is one that has all the parts assembled with perfect balance and harmony. Many burgers in Japan are served with all the flavorings already included, so before adding ketchup and mustard, try it as it’s served to you first. Secondly, a good hamburger has a dignified appearance and is built up beautifully. In some traditional American diner-style shops, they serve the ingredients side-by-side on a flat plate. But I’m of the opinion that places which serve the hamburger in its complete form think precisely about the proper way to pile ingredients—the size of the patty, bun and vegetables and their order—and this really reflects the sensitive and delicate technique of the Japanese. Especially in downtown Tokyo, hamburgers have crossed over from being “American-style” to being simply a “delicious meal.” Their originality is evolving every day. Lastly, a good hamburger needs to be dynamic and hearty. After all, hamburgers are entertainment!

Metropolis: What makes a good hamburger?

Ken Saito: Of course, the actual taste of the hamburger is crucial, but what happens before you bite into the hamburger is equally important. Mainly, the appearance and the aroma are extremely important factors. In smaller restaurants, you can hear and smell the hamburger patty being cooked. A perfectly assembled hamburger is a work of art. It’s exciting to imagine the taste of the hamburger before you actually take a bite. When I pick up the hamburger and smell the charcoal aroma, I can sense that I’m in for a helluva ride.

via: Metropolis Tokyo |  Read more:
h/t YMFY

Pinterest and the Acquisitive Gaze


I signed up for Pinterest without really knowing what it was, out of a general sense that it is important to reserve a user name on any service that’s garnering attention. When I found that it was an image aggregator, I didn’t understand what the fuss was about. Why would I want to serve as a volunteer photo researcher? How is Pinterest any different from those Tumblrs set up to display a mosaic of images? Is it supposed to be a Twitter of images or something? I couldn’t imagine what I would use it for, so I sort of forgot about it.

But recently Pinterest has entered the mainstream, as a para-retailing apparatus presumed to appeal mainly to women. The site’s supposed femaleness has occasioned a lot of theorizing, some of which Nathan Jurgenson details in this post, as has its anodyne commerciality. Bon Stewart argues that Pinterest, since it discourages self-promotion and relies entirely on the appropriation of someone else’s creative expression, turns curation into passive consumerism; it allows for the construction and circulation of a bland sanitized “Stepford” identity. In other words, it becomes another tool for enhancing our digital brands at the expense of the possibility of an uncommodified self.

Given that emphasis on passive consumption, it’s not surprising that Pinterest has come to be associated with shopping fantasies. Pinterest’s great technological advance seems to be that it lets users shop for images over the sprawl of the internet, turning it into an endless visual shopping mall in which one never runs out of money. Chris Tackett suggests that sites like Pinterest are actually “anti-consumerist” because they allow people the instant gratification of choosing things without actually having to buy them. “Virtual consumerism means a real world reduction in wasteful consumption,” he writes, and that’s all well and good, though I’m not sure that making window shopping more convenient is in any way “anticonsumerist.” If anything, that seems to reinforce the consumerist mentality while overcoming one of its main obstacles — people’s financial inability to perpetually shop. With Pinterest, they can at least simulate that experience, acquiring the images of things and associating them with themselves, appropriating the qualities the goods/images are thought to signify at that given moment. Pinterest allows for the purest expression we’ve yet seen of the Baudrillardian “passion for the code.” (“It is not the passion (whether of objects or subjects) for substances that speaks in fetishism, it is the passion for the code, which, by governing both objects and subjects, and by subordinating them to itself, delivers them up to abstract manipulation,” Baudrillard wrote in “Fetishism and Ideology.”) We accumulate and sort images, trying to extract their assimilable essences, and in the process reduce ourselves to a similar image, a similar agglomeration of putative qualities that can be read out of a surface.

by Rob Horning, The New Inquiry |  Read more:

The Rosetta Stone is an ancient Egyptian granodiorite stele inscribed with a decree issued at Memphis in 196 BC on behalf of King Ptolemy V. The decree appears in three scripts: the upper text is Ancient Egyptian hieroglyphs, the middle portion Demotic script, and the lowest Ancient Greek. Because it presents essentially the same text in all three scripts (with some minor differences between them), it provided the key to the modern understanding of Egyptian hieroglyphs.

via: Wikipedia | Read more:

Shark Cartilage May Contain Toxin

Shark cartilage, which has been hyped as a cancer preventive and joint-health supplement, may contain a neurotoxin that has been linked with Alzheimer’s and Lou Gehrig’s disease.

Scientists at the University of Miami analyzed cartilage samples collected from seven species of sharks off the coast of Florida. The specimens all contained high levels of a compound called beta-methylamino-L-alanine, or BMAA, which has been linked to the development of neurodegenerative diseases. Sharks accumulate the compound because of their status at the top of the oceanic food chain, consuming fish and other sea creatures that feed on BMAA-containing algae. The small tissue samples were obtained from sharks that were caught, tagged and released for tracking research, and no sharks were harmed for the study.

The findings are important because of the growing popularity of supplements that contain cartilage from shark fins. The products are widely sold and remain popular with consumers who view them as cancer fighters or as a remedy for joint and bone problems. The notion that shark cartilage can prevent cancer grew largely from the popularity of the 1992 book “Sharks Don’t Get Cancer.”

Although a number of studies have discredited shark cartilage as a cancer fighter, supplement makers have nonetheless made bold claims. In 2000, two supplement makers settled a federal suit as a result of hyping shark cartilage and paid restitution to customers.

Although the Miami scientists didn’t examine shark cartilage supplements directly, their findings add further cause for concern about the popularity of shark fin supplements. In the study, published in the journal Marine Drugs, the researchers found levels of BMAA ranging from 144 to 1,836 nanograms per milligram of cartilage in seven shark species, including hammerhead, blacknose, nurse and bull sharks. The toxin initially is produced by bacteria in large algae blooms that are brought on by agricultural runoff and sewage pollution.

Earlier studies have suggested that BMAA may be common in the brains of people with degenerative diseases. One in 2009, for example, found that brain samples from people who died of Alzheimer’s or Lou Gehrig’s disease had BMAA levels as high as 256 ng/mg. The brains of control subjects who died of other causes had only trace amounts or none at all. While BMAA has never been definitively cited as a cause of degenerative diseases in humans, some scientists hypothesize that it may be a contributing factor.

by Anahad O'Connor, NY Times |  Read more:
Barbara Walton/European Pressphoto Agency

Wednesday, March 7, 2012

Lindsey Buckingham

We're Underestimating the Risk of Human Extinction


Unthinkable as it may be, humanity, every last person, could someday be wiped from the face of the Earth. We have learned to worry about asteroids and supervolcanoes, but the more likely scenario, according to Nick Bostrom, a professor of philosophy at Oxford, is that we humans will destroy ourselves.

Bostrom, who directs Oxford's Future of Humanity Institute, has argued over the course of several papers that human extinction risks are poorly understood and, worse still, severely underestimated by society. Some of these existential risks are fairly well known, especially the natural ones. But others are obscure or even exotic. Most worrying to Bostrom is the subset of existential risks that arise from human technology, a subset that he expects to grow in number and potency over the next century.

Despite his concerns about the risks posed to humans by technological progress, Bostrom is no luddite. In fact, he is a longtime advocate of transhumanism—the effort to improve the human condition, and even human nature itself, through technological means. In the long run he sees technology as a bridge, a bridge we humans must cross with great care, in order to reach new and better modes of being. In his work, Bostrom uses the tools of philosophy and mathematics, in particular probability theory, to try and determine how we as a species might achieve this safe passage. What follows is my conversation with Bostrom about some of the most interesting and worrying existential risks that humanity might encounter in the decades and centuries to come, and about what we can do to make sure we outlast them.

Some have argued that we ought to be directing our resources toward humanity's existing problems, rather than future existential risks, because many of the latter are highly improbable. You have responded by suggesting that existential risk mitigation may in fact be a dominant moral priority over the alleviation of present suffering. Can you explain why?


Bostrom: Well, suppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially—somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do. There are so many people that could come into existence in the future if humanity survives this critical period of time—we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions of times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.

In the short term you don't seem especially worried about existential risks that originate in nature like asteroid strikes, supervolcanoes and so forth. Instead you have argued that the majority of future existential risks to humanity are anthropogenic, meaning that they arise from human activity. Nuclear war springs to mind as an obvious example of this kind of risk, but that's been with us for some time now. What are some of the more futuristic or counterintuitive ways that we might bring about our own extinction?

Bostrom: I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.

Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that's related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.

by Ross Andersen, The Atlantic |  Read more:

Chema Madoz
via:

The Era of Small and Many


Earlier this year, my state’s governor asked if I’d give an after-lunch speech to some of his cabinet and other top officials who were in the middle of a retreat. It’s a useful discipline for writers and theorists to have to summarize books in half an hour, and to compete with excellent local ice cream. No use telling these guys how the world should be at some distant future moment when they’ll no longer be in office—instead, can you isolate themes broad enough to be of use to people working on subjects from food to energy to health care to banking to culture, and yet specific enough to help them choose among the options that politics daily throws up? Can you figure out a principle that might undergird a hundred different policies?

Or another way to say it: can you figure out which way history wants to head (since no politician can really fight the current) and suggest how we might surf that wave?

Here’s my answer: we’re moving, if we’re lucky, from the world of few and big to the world of small and many. We’ll either head there purposefully or we’ll be dragged kicking, but we’ve reached one of those moments when tides reverse.  (...)

Many of us get a preview of life in the age of small and many when we sit down at our computers each day. Fifteen years ago we still depended on a handful of TV networks and newspaper conglomerates to define our world for us; now we have a farmers’ market in ideas. We all add to the flow with each Facebook post, and we can find almost infinite sources of information. It’s reshaping the way we see the world—not, of course, without some trauma (from the hours wasted answering e-mail to the death of too much good, old-school journalism). All these transitions will be traumatic to one extent or another, since they are so very big. We’re reversing the trend of generations.

But the general direction seems to me increasingly clear. Health care? In place of a few huge, high-tech hospitals dispensing the most expensive care possible, all the data suggest we’d be healthier with lots of primary and preventive care from physicians’ assistants and nurse practitioners in our neighborhoods. Banking? Instead of putting more than half our assets in half a dozen money-center banks that devote themselves to baroque financial instruments, we need capital closer to home, where loan officers have some sense for gauging risk and need.

by Bill McKibben, Orion Magazine |  Read more:
Painting: Suzanne Stryk

The Mouse Trap


Mark Mattson knows a lot about mice and rats. He's fed them; he's bred them; he's cut their heads open with a scalpel. Over a brilliant 25-year career in neuroscience—one that's made him a Laboratory Chief at the National Institute on Aging, a professor of neuroscience at Johns Hopkins, a consultant to Alzheimer's nonprofits, and a leading scholar of degenerative brain conditions—Mattson has completed more than 500 original, peer-reviewed studies, using something on the order of 20,000 laboratory rodents. He's investigated the progression and prevention of age-related diseases in rats and mice of every kind: black ones and brown ones; agoutis and albinos; juveniles and adults; males and females. Still, he never quite noticed how fat they were—how bloated and sedentary and sickly—until a Tuesday afternoon in February 2007. That's the day it occurred to him, while giving a lecture at Emory University in Atlanta, that his animals were nothing less (and nothing more) than lazy little butterballs. His animals and everyone else's, too.

Mattson was lecturing on a research program that he'd been conducting since 1995, on whether a strict diet can help ward off brain damage and disease. He'd generated some dramatic data to back up the theory: If you put a rat on a limited feeding schedule—depriving it of food every other day—and then blocked off one of its cerebral arteries to induce a stroke, its brain damage would be greatly reduced. The same held for mice that had been engineered to develop something like Parkinson's disease: Take away their food, and their brains stayed healthier.

How would these findings apply to humans, asked someone in the audience. Should people skip meals, too? At 5-foot-7 and 125 pounds, Mattson looks like a meal-skipper, and he is one. Instead of having breakfast or lunch, he takes all his food over a period of a few hours each evening—a bowl of steamed cabbage, a bit of salmon, maybe some yogurt. It's not unlike the regime that appears to protect his lab animals from cancer, stroke, and neurodegenerative disease. "Why do we eat three meals a day?" he asks me over the phone, not waiting for an answer. "From my research, it's more like a social thing than something with a basis in our biology."

But Mattson wasn't so quick to prescribe his stern feeding schedule to the crowd in Atlanta. He had faith in his research on diet and the brain but was beginning to realize that it suffered from a major complication. It might well be the case that a mouse can be starved into good health—that a deprived and skinny brain is more robust than one that's well-fed. But there was another way to look at the data. Maybe it's not that limiting a mouse's food intake makes it healthy, he thought; it could be that not limiting a mouse's food makes it sick. Mattson's control animals—the rodents that were supposed to yield a normal response to stroke and Parkinson's—might have been overweight, and that would mean his baseline data were skewed.

"I began to realize that the ‘control’ animals used for research studies throughout the world are couch potatoes," he tells me. It's been shown that mice living under standard laboratory conditions eat more and grow bigger than their country cousins. At the National Institute on Aging, as at every major research center, the animals are grouped in plastic cages the size of large shoeboxes, topped with a wire lid and a food hopper that's never empty of pellets. This form of husbandry, known as ad libitum feeding, is cheap and convenient since animal technicians need only check the hoppers from time to time to make sure they haven’t run dry. Without toys or exercise wheels to distract them, the mice are left with nothing to do but eat and sleep—and then eat some more.

That such a lifestyle would make rodents unhealthy, and thus of limited use for research, may seem obvious, but the problem appears to be so flagrant and widespread that few scientists bother to consider it. Ad libitum feeding and lack of exercise are industry-standard for the massive rodent-breeding factories that ship out millions of lab mice and rats every year and fuel a $1.1-billion global business in living reagents for medical research. When Mattson made that point in Atlanta, and suggested that the control animals used in labs were sedentary and overweight as a rule, several in the audience gasped. His implication was clear: The basic tool of biomedicine—and its workhorse in the production of new drugs and other treatments—had been transformed into a shoddy, industrial product. Researchers in the United States and abroad were drawing the bulk of their conclusions about the nature of human disease—and about Nature itself—from an organism that's as divorced from its natural state as feedlot cattle or oven-stuffer chickens.

Mattson isn't much of a doomsayer in conversation. "I realized that this information should be communicated more widely," he says without inflection, of that tumultuous afternoon in Atlanta. In 2010, he co-authored a more extensive, but still measured, analysis of the problem for the Proceedings of the National Academy of Sciences. The paper, titled " 'Control' laboratory rodents are metabolically morbid: Why it matters," laid out the case for how a rodent obesity epidemic might be affecting human health.

Standard lab rats and lab mice are insulin-resistant, hypertensive, and short-lived, he and his co-authors explained. Having unlimited access to food makes the animals prone to cancer, type-2 diabetes, and renal failure; it alters their gene expression in substantial ways; and it leads to cognitive decline. And there's reason to believe that ragged and rundown rodents will respond differently—abnormally, even—to experimental drugs.

by Daniel Engber, Slate |  Read more:
Illustration by Rob Donnelly.

Friend Me, or Else


If you think privacy settings on your Facebook and Twitter accounts guarantee future employers or schools can't see your private posts, guess again.

Employers and colleges find the treasure-trove of personal information hiding behind password-protected accounts and privacy walls just too tempting, and some are demanding full access from job applicants and student athletes.

In Maryland, job seekers applying to the state's Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through wall posts, friends, photos and anything else that might be found behind the privacy wall.

Previously, applicants were asked to surrender their user name and password, but a complaint from the ACLU stopped that practice last year. While submitting to a Facebook review is voluntary, virtually all applicants agree to it out of a desire to score well in the interview, according to Maryland ACLU legislative director Melissa Coretz Goemann.

Student-athletes in colleges around the country also are finding out they can no longer maintain privacy in Facebook communications because schools are requiring them to "friend" a coach or compliance officer, giving that person access to their “friends-only” posts. Schools are also turning to social media monitoring companies with names like UDilligence and Varsity Monitor for software packages that automate the task. The programs offer a "reputation scoreboard" to coaches and send "threat level" warnings about individual athletes to compliance officers.

A recent revision in the handbook at the University of North Carolina is typical:

"Each team must identify at least one coach or administrator who is responsible for having access to and regularly monitoring the content of team members’ social networking sites and postings,” it reads. "The athletics department also reserves the right to have other staff members monitor athletes’ posts."

All this scrutiny is too much for Bradley Shear, a Washington, D.C., lawyer who says both schools and employers are violating the First Amendment with demands for access to otherwise private social media content.

"I can't believe some people think it's OK to do this,” he said. “Maybe it's OK if you live in a totalitarian regime, but we still have a Constitution to protect us. It's not a far leap from reading people's Facebook posts to reading their email. ... As a society, where are we going to draw the line?"

by Bob Sullivan, MSNBC |  Read more:
Image via: Mashable

Tuesday, March 6, 2012


Fred Phleger / Robert Lopshire. Ann Can Fly.
Random House, 1959
via:

To Putter, Divine


As fantasies go, this one is beyond innocent, involving neither a bemuscled UPS man nor an indulgent yoga boot camp with my best friend in Tulum. In it, husband and child are elsewhere but safe, returning home soon but not imminently. I have an hour alone in the apartment, with which to do what I please. And what pleases me is to putter.

Puttering (or pottering or putzing, depending on your tribe) is an activity, if it can be properly called that, of desultory bliss. It’s a mental and physical wandering that helpfully repels any impulse to be genuinely productive. The word—which evokes other varieties of ineffectuality, such as spluttering and stuttering—has an archaic taint, bringing to mind biddies in their rose gardens and widowers with their coin collections. But modern life has elevated puttering to an aspirational necessity. Just as generations of working men yearned for their Sunday-morning tee time, now overbusy people choose to celebrate their individuated Sabbaths with a round of fussing about.

In a survey on the uses of free time among American women commissioned by Real Simple magazine and conducted by the Families and Work Institute, 71 percent said that they most like to spend their free time “just relaxing.” On a long list of leisure choices, it came in third, ahead of reading, watching TV, seeing friends, playing sports, or listening to music; tied with “going out for a special meal” and spending time with pets (which in any case is obligatory; the dog has to be walked); and behind only spending time with their kids or partner. Since respondents were given so many specific options to choose from, this “just relaxing” seems to encompass … nothing in particular. “It’s puttering,” Ellen Galinsky, president of the Families and Work Institute, theorizes. “It’s the abandonment of our forever to-do list. I’m so happy when I have time to just putter. I might look at the flowers and decide they need watering. I might think about calling my sister. I might think about calling my college roommate.” And then she might half-do something entirely different, before meandering to the next diversion.

Despite two generations of gains made by women in the workplace and a growing acknowledgment that men can work the dishwasher and clothes dryer, too, the Real Simple survey found that American women retain a Puritanical streak: They spend considerable amounts of their free time catching up on housework, even though they hate it, because they feel they cannot enjoy themselves properly until all their chores are done. It’s here that puttering finds its greatest value, as a way out of this self-defeating bind. It offers respite from the relentless obligations of a life in which even relaxing can feel like something that needs to be scheduled in, and it does so with a ruse. Putterers carry an aura of being importantly occupied (picking up, gardening, sorting coins) even as they’re doing nada. It’s the cover of busyness that creates the insulating bubble, for it shields you from the disapproval of onlookers—and even from yourself.

by Lisa Miller, New York Magazine
Photo: Alamy

vitreOus via: Flickr

A Better Brew

The rise of extreme beer.

Elephants, like many of us, enjoy a good malted beverage when they can get it. At least twice in the past ten years, herds in India have stumbled upon barrels of rice beer, drained them with their trunks, and gone on drunken rampages. (The first time, they trampled four villagers; the second time they uprooted a pylon and electrocuted themselves.) Howler monkeys, too, have a taste for things fermented. In Panama, they’ve been seen consuming overripe palm fruit at the rate of ten stiff drinks in twenty minutes. Even flies have a nose for alcohol. They home in on its scent to lay their eggs in ripening fruit, insuring their larvae a pleasant buzz. Fruit-fly brains, much like ours, are wired for inebriation.

The seductions of drink are wound deep within us. Which may explain why, two years ago, when John Gasparine was walking through a forest in southern Paraguay, his thoughts turned gradually to beer. Gasparine is a businessman from Baltimore. He owns a flooring company that uses sustainably harvested wood and he sometimes goes to South America to talk to suppliers. On the trip in question, he had noticed that the local wood-carvers often used a variety called palo santo, or holy wood. It was so heavy that it sank in water, so hard and oily that it was sometimes made into ball bearings or self-lubricating bushings. It smelled as sweet as sandalwood and was said to impart its fragrance to food and drink. The South Americans used it for salad bowls, serving utensils, maté goblets, and, in at least one case, wine barrels.

Gasparine wasn’t much of a wine drinker, but he had become something of a beer geek. (His thick eyebrows, rectangular glasses, and rapid-fire patter seem ideally suited to the parsing of obscure beverages.) A few years earlier, he’d discovered a bar in downtown Baltimore called Good Love that had several unusual beers on tap. The best, he thought, were from a place called Dogfish Head, in southern Delaware. The brewery’s motto was “Off-Centered Ales for Off-Centered People.” It made everything from elegant Belgian-style ales to experimental beers brewed with fresh oysters or arctic cloudberries. Gasparine decided to send a note to the owner, Sam Calagione. Dogfish was already aging some of its beer in oak barrels. Why not try something more aromatic, like palo santo?

Calagione was used to odd suggestions from customers. On Monday mornings, his brewery’s answering machine is sometimes full of rambling meditations from fans, in the grips of beery enlightenment at their local bar. But Gasparine’s idea was different. It spoke to Calagione’s own contradictory ambitions for Dogfish: to make beers so potent and unique that they couldn’t be judged by ordinary standards, and to win for them the prestige and premium prices usually reserved for fine wine. And so, a year later, Calagione sent Gasparine back to Paraguay with an order for forty-four hundred board feet of palo santo. “I told him to get a shitload,” he remembers. “We were going to build the biggest wooden barrel since the days of Prohibition.”

Gasparine, by then, had begun to have second thoughts. No lumbermill he knew had ever cut so much palo santo, and he wasn’t sure that any could. Bulnesia sarmientoi is a weedy, willowy tree, sometimes called ironwood. It’s difficult to get large boards out of it, and even small ones can dull a saw blade. Wood experts rate a species’ hardness on the Janka scale—a measure of how many pounds of force it takes to drive a half-inch steel ball halfway into a board. Yellow pine rates around seven hundred, oak twice as high. Palo santo hovers near forty-five hundred—three times as high as rock maple. It’s one of the two or three hardest woods in the world.

by Burkhard Bilger, New Yorker |  Read more:
Photograph by Martin Schoeller

via: