Sunday, April 29, 2012

A Universe from Nothing?

Some of you may have been following a tiny brouhaha (“kerfuffle” is so overused, don’t you think?) that has sprung up around the question of why the universe exists. You can’t say we think small around here.

First Lawrence Krauss came out with a new book, A Universe From Nothing: Why There Is Something Rather Than Nothing (based in part on a popular YouTube lecture), which addresses this question from the point of view of a modern cosmologist. Then David Albert, speaking as a modern philosopher of science, came out with quite a negative review of the book in the New York Times. And discussion has gone back and forth since then: here’s Jerry Coyne (mostly siding with Albert), the Rutgers Philosophy of Cosmology blog (with interesting voices in the comments), a long interview with Krauss in the Atlantic, comments by Massimo Pigliucci, and another response by Krauss on the Scientific American site.

Executive summary

This is going to be kind of long, so here’s the upshot. Very roughly, there are two different kinds of questions lurking around the issue of “Why is there something rather than nothing?” One question is, within some framework of physical laws that is flexible enough to allow for the possible existence of either “stuff” or “no stuff” (where “stuff” might include space and time itself), why does the actual manifestation of reality seem to feature all this stuff? The other is, why do we have this particular framework of physical law, or even something called “physical law” at all? Lawrence (again, roughly) addresses the first question, and David cares about the second, and both sides expend a lot of energy insisting that their question is the “right” one rather than just admitting they are different questions. Nothing about modern physics explains why we have these laws rather than some totally different laws, although physicists sometimes talk that way — a mistake they might be able to avoid if they took philosophers more seriously. Then the discussion quickly degrades into name-calling and point-missing, which is unfortunate because these are smart people who agree about 95% of the interesting issues, and the chance for productive engagement diminishes considerably with each installment.

How the universe works

Let’s talk about the actual way physics works, as we understand it. Ever since Newton, the paradigm for fundamental physics has been the same, and includes three pieces. First, there is the “space of states”: basically, a list of all the possible configurations the universe could conceivably be in. Second, there is some particular state representing the universe at some time, typically taken to be the present. Third, there is some rule for saying how the universe evolves with time. You give me the universe now, the laws of physics say what it will become in the future. This way of thinking is just as true for quantum mechanics or general relativity or quantum field theory as it was for Newtonian mechanics or Maxwell’s electrodynamics.

Quantum mechanics, in particular, is a specific yet very versatile implementation of this scheme. (And quantum field theory is just a particular example of quantum mechanics, not an entirely new way of thinking.) The states are “wave functions,” and the collection of every possible wave function for some given system is “Hilbert space.” The nice thing about Hilbert space is that it’s a very restrictive set of possibilities (because it’s a vector space, for you experts); once you tell me how big it is (how many dimensions), you’ve specified your Hilbert space completely. This is in stark contrast with classical mechanics, where the space of states can get extraordinarily complicated. And then there is a little machine — “the Hamiltonian” — that tells you how to evolve from one state to another as time passes. Again, there aren’t really that many kinds of Hamiltonians you can have; once you write down a certain list of numbers (the energy eigenvalues, for you pesky experts) you are completely done.

We should be open-minded about what form the ultimate laws of physics will take, but almost all modern attempts to get at them take quantum mechanics for granted. That’s true for string theory and other approaches to quantum gravity — they might take very different views of what constitutes “spacetime” or “matter,” but very rarely do they muck about with the essentials of quantum mechanics. It’s certainly the case for all of the scenarios Lawrence considers in his book. Within this framework, specifying “the laws of physics” is just a matter of picking a Hilbert space (which is just a matter of specifying how big it is) and picking a Hamiltonian. One of the great things about quantum mechanics is how extremely restrictive it is; we don’t have a lot of room for creativity in choosing what kinds of laws of physics might exist. It seems like there’s a lot of creativity, because Hilbert space can be extremely big and the underlying simplicity of the Hamiltonian can be obscured by our (as subsets of the universe) complicated interactions with the rest of the world, but it’s always the same basic recipe.
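The recipe described above (pick a Hilbert space by fixing its dimension, pick a Hamiltonian by listing its energy eigenvalues, then let states evolve) can be sketched in a few lines of code. This is purely a toy illustration of the scheme, not anything from the book; the dimension and eigenvalues below are made up:

```python
import numpy as np

# A toy "universe": a 3-dimensional Hilbert space.
# Specifying the laws of physics = picking the Hamiltonian,
# which (in its eigenbasis) is just a list of energy eigenvalues.
energies = np.array([0.0, 1.0, 2.5])  # hypothetical eigenvalues

# A state is a unit vector (wave function) in that space.
psi0 = np.array([1.0, 1.0, 0.0], dtype=complex)
psi0 /= np.linalg.norm(psi0)

def evolve(psi, t, energies):
    """Schrodinger evolution in the energy eigenbasis: each
    component just picks up a phase exp(-i E t), with hbar = 1."""
    return np.exp(-1j * energies * t) * psi

psi_t = evolve(psi0, t=2.0, energies=energies)

# Unitarity: total probability is conserved as time passes.
print(np.allclose(np.linalg.norm(psi_t), 1.0))  # True
```

The point of the sketch is how little freedom there is: once `energies` is written down, the dynamics are completely fixed.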

So within that framework, what does it mean to talk about “a universe from nothing”? We still have to distinguish between two possibilities, but at least this two-element list exhausts all of them.

by Sean Carroll, Discover Magazine |  Read more:

China Digs It: How Beijing Cornered the Rare Earths Market

In September 2010, after Japan arrested a Chinese fishing boat captain in disputed waters in the East China Sea, Beijing allegedly retaliated by holding back shipments to Tokyo of rare earths, a group of 17 elements used in high-tech products. Arcane names such as cerium, dysprosium, and lanthanum -- elements that populate the bottom of the periodic table and whose unique properties make them ideal materials in the batteries that power iPhones and electric vehicles -- suddenly commanded global attention. It mattered little whether Beijing actually carried through with the threat (reports are murky); the damage was already done: The world had awoken to the fact that overreliance on China for rare-earths supplies could put the international high-tech supply chain at risk.

Today, China produces more than 90 percent of the global supply of rare earths but sits on just about one-third of the world's reserves of the elements -- with the rest scattered from the United States (13 percent) to Australia (5 percent). That was not always the case. A few decades ago, the United States led production, primarily through a large mine in California owned by the mining firm Molycorp. But as California's environmental regulations tightened in the 1990s, costs rose and profits declined, eventually prompting the American industry to shut down.

In the meantime, China started assuming the role of global supplier, spurred on by the Chinese patriarch Deng Xiaoping's supposed proclamation that "there is oil in the Middle East, but there are rare earths in China." In the last few decades, Chinese production of rare earths skyrocketed, more than offsetting declining production elsewhere. And consumers grew accustomed to what seemed to be a low-cost and reliable supplier in China.

Yet behind the façade of stability was an industry marked by mismanagement. First, a perceived abundance of the resources led to a general disregard for efficient and scalable production. In the early days of the Chinese rare-earths rush, preservation of resources was an afterthought, as private entrepreneurs, sensing a lucrative market, dove in. Many of these small-scale miners operated off the books and with little concern for environmental degradation. They were so numerous that the Chinese government could not keep track of them.

Even so, their efforts added up. Between 1990 and 2000, Chinese production of rare earths skyrocketed from just 16,000 tons to 73,000 tons. And in the decade since, China has essentially come to monopolize rare-earths mining. At its peak in 2009, China accounted for 129,000 of the 132,000 tons produced worldwide -- in other words, 97 percent of total global output. Meanwhile, it exported roughly 40-50 percent of what it produced.

Yet as demand for these raw materials rose, Beijing became increasingly unhappy that it was "selling gold to foreigners at the price of Chinese radishes," as one Chinese expression had it. Nationalistic voices in Chinese op-ed pages argued that China should create an OPEC-like rare-earths cartel or strategic reserve. Those calls were colored by an unsubstantiated belief among many Chinese that Japan was keeping just such a strategic reserve of its own, in which it had squirreled away 20 years' worth of rare earths that it had imported from China.

by Damien Ma, Foreign Affairs |  Read more:
Photo: Smelting lanthanum in Inner Mongolia. (David Gray / Courtesy Reuters)

louros:
Le piège (The Trap). 35 x 35 cm; mixed media on canvas
Lou Ros, 2012
via:

My Mom Is My BFF

Amid the weeknight din of Ruby Foo’s, the hostess bore complimentary cocktails—a peace offering, for making Julie and Samantha wait twenty minutes for their reserved table. She offered one drink to Julie and one to … Wait a minute. Peering at Samantha, she said, “I want to give this to you, but …”

Samantha, who had just been listening to her mother describe what an “awful, awful slob” she was as a teenager, nodded toward Julie and said, “You should give it to her.”

At one point at Ruby Foo’s, it occurred to me that the hostess had made an honest calculation when settling on her genre of olive branch. The gift of pretty drinks assumed a friendship. The cocktails said, “Enjoy your girls’ night out!”

And from a distance anyone might’ve figured mother and daughter for pals. Samantha refrained from the typical teenage indicators of mother-induced misery. No mortified slumping, no glassy stare, no snapping, no sighing, no episodic glaring, no thumbing out one cell-phone SOS after another. And Julie? When Samantha spoke, Julie listened until her daughter had completed her thought. Which I assumed happened only in dreams and completely unrealistic movies.

Seriously, was there no discord? They assured me that there was. Sometimes they fought “all day long.”

Over what?

Hmm. “Clean up your room?” “Don’t make me clean my room? I like my stuffed animals on the floor. I’m comfortable with my stuffed animals on the floor. Let me be me!”

I watched them closely. Humans are only so good at hiding jealousies and tensions, even for short periods of time. We all come with our little tells, and mothers and daughters are human control panels of buttons waiting to be pushed. There’s not a teenager alive who hasn’t considered her mom intolerable and embarrassing, or pretended not to know her in public, but based on what I was seeing, it was possible to achieve the opposite.

Watching Julie and Samantha felt a little like seeing a fantasy come to life. My mom hasn’t let me finish a sentence since 1975. We have never shared clothes. We do not text. She often e-mails me, hilariously, in all-caps, because it’s easier than finding the uncap key. Neither she nor I have ever uttered the word sex in the other’s presence. In fact, I’m positive my mom has never spoken the word at all. I now understand all of that; her parenting approach was a generational mandate. But sometimes, as a pre-Gilmore Girls teenager, I had this idea that mothers and daughters should walk arm-in-arm down leafy autumn roads wearing artfully knotted scarves, exchange gentle information on mean-girl management and boyfriends, and race home through the dappled sunlight to make cocoa. Once, in college, I tried to achieve this scenario with an aunt. It just felt weird.

Now mother-daughter BFFdom is a thing, having morphed its way onto the radar of sociologists, psychologists, authors, designers, marketers, and reality-show creators. The willingness to exploit one’s pubescent daughter for adult dating and fashion advice must be a Real Housewives casting prerequisite, and there’s no telling what the upcoming VH1 reality show Mama Drama will bring as it focuses on the turbo version of bestie mothers: “the partying parent who shares drinks, wardrobe, and social life with her daughter, and occasionally needs to be reminded that she’s the parent.”

Now that the phenomenon is here, it’s a little like watching the genie leave the bottle. You hope you’ve made the right wish.

by Paige Williams, New York Magazine |  Read more:
Photo: Gillian Laub

Yelp, You Cost Me $2000 by Suppressing Genuine Reviews, Here’s How You Fix It

Dear Yelp,

It’s highly likely that you’re costing your users millions of dollars by offering some astonishingly bad recommendations.

For example, I did business with a moving company based on 5 star recommendations that you presented.

As a result I was strong-armed into paying $2000 more than originally quoted. I spent 40 days without any furniture and quite a few of my belongings have been misplaced – forever.

I’ve always loved your site. I love your startup story. I love your crowd sourcing review model. For years now I’ve been using Yelp to help me make decisions about where to eat and what to purchase. Yelp has never steered me wrong. So what happened this time? How come your reviewers were so far off the mark?

They weren’t.

Your reviewers described exactly what I experienced and warned against this company again and again. But you hid all of those reviews.

Wait, what? Why would you do that?

After Googling the issue I found out that some time back you introduced a filter link (screen shot here) that is very difficult to notice and access, used to hide reviews that seem to be fake. You also introduced an automated algorithm that flags suspicious-looking reviews and shuffles them into the filtered section.

Your algorithm typically hides entries by people who only post one review and who don’t otherwise engage in Yelp. Your assumption is that if a user only posts one review, posts no comments, has no friends etc. then most likely they are fake and trying to game the system.

Let’s call this “Assumption X”.

In the case of the company that I mention above (the one that ripped me off), Assumption X is exactly wrong at least 10 out of 14 times. Just to be clear, 10 honest one-star reviews have been hidden from public view. That’s a 71% false positive hit rate.

So why did Yelp get it wrong 10 times?

In each case the one-star review was left by someone who would never normally leave a review… they were simply so outraged that they were motivated to sign up to Yelp and try to warn others how bad this company is. None of them ever used Yelp again. Furthermore, they didn’t have the knowledge or inclination to try to make their Yelp profile look acceptable to Yelp’s automated suppression systems.
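The heuristic described above, and the arithmetic behind the 71% figure, can be sketched like this. The field names and thresholds are my own illustrative assumptions, not Yelp's actual algorithm:

```python
# Hypothetical sketch of "Assumption X": flag reviewers with a thin
# profile (one review, no friends, no comments) as likely fakes.

def looks_fake(reviewer):
    return (reviewer["reviews"] <= 1
            and reviewer["friends"] == 0
            and reviewer["comments"] == 0)

# The 14 filtered reviews from the post: per the author, 10 were
# honest warnings from one-time users and (at most) 4 were not.
filtered = ([{"reviews": 1, "friends": 0, "comments": 0, "honest": True}] * 10
            + [{"reviews": 1, "friends": 0, "comments": 0, "honest": False}] * 4)

flagged = [r for r in filtered if looks_fake(r)]
false_positives = sum(r["honest"] for r in flagged)
rate = false_positives / len(flagged)
print(f"{false_positives}/{len(flagged)} = {rate:.0%}")  # 10/14 = 71%
```

The heuristic is cheap to compute, which is presumably why it is attractive; the sketch just makes vivid that a one-time reviewer with a legitimate grievance is indistinguishable from a fake under it.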

by Justin Vincent, Building Stuff |  Read more:

MF Global: Will Anyone Ever Go to Jail?


So the Senate Banking Committee is beginning hearings today on the MF Global scandal, hearings entitled, "The Collapse of MF Global: Lessons Learned and Policy Implications." Apparently the government has already moved to the reflective, introspective, South Park-ian, "You know, I learned something today!" stage in its examination of the scandal, despite the fact that the government’s official "response" hasn’t even started yet, i.e. authorities have yet to arrest a single person in this brazen billion-dollar theft story.  (...)

Nobody disputes the fact that MF Global officials dipped into customer accounts and took over $1.6 billion of customer money. We not only know that company officials reached into customer accounts, we know they brazenly lied to bondholders, ratings agencies and investors about the firm's financial condition ("MF Global's capital and liquidity has never been stronger," wrote the CFO of MF Global’s holding company, on the same day Moody’s downgraded it to junk status).

We even know that eighteen days before the firm went bust, company officers discussed how quickly to return money to customers, and even contemplated, in writing, the possibility of not returning the money right away. This is from a risk-assessment document prepared by company officers entitled "Break the Glass":
…Who do we want to be after the storm? How quickly do we want to send cash back to clients, what is the message if we do not send immediately, what is the strategy if we want to keep the customer and wait until the storm passes?
In the wake of the 2008 crash it’s often been said that one of the major problems in getting the public to grasp the crimes committed by banks and financial companies is the extreme complexity of the transactions used. The mortgage-backed-securities scam by itself was really just a common fraud scheme, but it was cloaked in the extremely complex verbiage and advanced math of derivatives transactions, which made it possible for bankers to bluff their way through an argument that no crimes had been committed.

But MF Global is different. This is not complicated at all. This is just stealing. You owe money, you don’t have the cash to cover it, and so you take money belonging to someone else to cover your debts. There’s no room at all here for an argument that this money was just lost due to a bad investment, an erroneous calculation based on someone's poor understanding of a complex transaction, etc. It’s straight-up embezzlement.

Nonetheless, there’s been an intense effort at trying to convince the public that no crime has been committed. Whoever is handling MF Global’s P.R. (according to Pam Martens in this excellent piece, it’s APCO worldwide, a former Big Tobacco spin factory) appears to have convinced the company’s officers to emphasize the word “chaos” in describing the last days of the firm – as though $1.2 billion wasn’t intentionally stolen, per se, but simply lost in a kind of uncontrolled whirlwind of transactions that magically carried the money out of accounts off to worlds unknown.

I call this the “Wizard of Oz” defense: a Big Twister hit the firm’s customer accounts, chaos ensued, and when the dust settled, no one knew where the heck little Dorothy and her money had gone.

by Matt Taibbi, Rolling Stone |  Read more:
Photo: Chip Somodevilla/Getty Images

Alfred Wertheimer: Hold Me Tight (1956) 
via:

Saturday, April 28, 2012

Most Likely to Succeed

Predicting success in football and teaching.

One of the most important tools in contemporary educational research is “value added” analysis. It uses standardized test scores to look at how much the academic performance of students in a given teacher’s classroom changes between the beginning and the end of the school year. Suppose that Mrs. Brown and Mr. Smith both teach a classroom of third graders who score at the fiftieth percentile on math and reading tests on the first day of school, in September. When the students are retested, in June, Mrs. Brown’s class scores at the seventieth percentile, while Mr. Smith’s students have fallen to the fortieth percentile. That change in the students’ rankings, value-added theory says, is a meaningful indicator of how much more effective Mrs. Brown is as a teacher than Mr. Smith.

It’s only a crude measure, of course. A teacher is not solely responsible for how much is learned in a classroom, and not everything of value that a teacher imparts to his or her students can be captured on a standardized test. Nonetheless, if you follow Brown and Smith for three or four years, their effect on their students’ test scores starts to become predictable: with enough data, it is possible to identify who the very good teachers are and who the very poor teachers are. What’s more—and this is the finding that has galvanized the educational world—the difference between good teachers and poor teachers turns out to be vast.
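The crude version of the measure can be sketched in a few lines, using the Mrs. Brown / Mr. Smith numbers from the article (the data structure and function names are my own, purely for illustration; real value-added models adjust for many more factors and average over multiple years):

```python
# Toy value-added calculation: a teacher's score is the change in the
# class's average percentile ranking between September and June.

classes = {
    "Brown": {"september": 50, "june": 70},
    "Smith": {"september": 50, "june": 40},
}

def value_added(record):
    return record["june"] - record["september"]

for teacher, record in sorted(classes.items(),
                              key=lambda kv: value_added(kv[1]),
                              reverse=True):
    print(f"{teacher}: {value_added(record):+d} percentile points")
# Brown: +20 percentile points
# Smith: -10 percentile points
```

A single year of this number is noisy, which is exactly the article's caveat; the claim is only that the estimates become predictive once several years of data are averaged.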

Eric Hanushek, an economist at Stanford, estimates that the students of a very bad teacher will learn, on average, half a year’s worth of material in one school year. The students in the class of a very good teacher will learn a year and a half’s worth of material. That difference amounts to a year’s worth of learning in a single year. Teacher effects dwarf school effects: your child is actually better off in a “bad” school with an excellent teacher than in an excellent school with a bad teacher. Teacher effects are also much stronger than class-size effects. You’d have to cut the average class almost in half to get the same boost that you’d get if you switched from an average teacher to a teacher in the eighty-fifth percentile. And remember that a good teacher costs as much as an average one, whereas halving class size would require that you build twice as many classrooms and hire twice as many teachers.

Hanushek recently did a back-of-the-envelope calculation about what even a rudimentary focus on teacher quality could mean for the United States. If you rank the countries of the world in terms of the academic performance of their schoolchildren, the U.S. is just below average, half a standard deviation below a clump of relatively high-performing countries like Canada and Belgium. According to Hanushek, the U.S. could close that gap simply by replacing the bottom six per cent to ten per cent of public-school teachers with teachers of average quality. After years of worrying about issues like school funding levels, class size, and curriculum design, many reformers have come to the conclusion that nothing matters more than finding people with the potential to be great teachers. But there’s a hitch: no one knows what a person with the potential to be a great teacher looks like. The school system has a quarterback problem.

by Malcolm Gladwell, The New Yorker |  Read more:
Illustration: Joost Swarte

Leonard Cohen


[ed. An abbreviated, but still very nice version of Hallelujah (for reasons known only to Mr. Cohen). Also, listen to Jeff Buckley's cover of the full song.]

Lyrics

The Limits to Environmentalism

[ed. If you find this article interesting be sure to read the comments section. I'm a firm believer that economic growth and good environmental stewardship are quite compatible, if you start with good design. Many degraded environments can be restored simply by correcting elements of bad design already in place. In other words, want new solutions? Stop creating (debating and accepting) old problems. As one commenter notes: "One of the places that environmentalism would be unrecognizable vis-a-vis the 70s is in architecture, landscape architecture, and urban planning. Perhaps landscape architecture especially – all fields concerned with modernity and technology. Today, there is a dynamic and creative embrace of technology in these fields." (see previous post on Communities for People)]

If you were cryogenically frozen in the early 1970s, like Woody Allen was in Sleeper, and brought back to life today, you would obviously find much changed about the world.

Except environmentalism and its underlying precepts. That would be a familiar and quaint relic. You would wake up from your Rip Van Winkle period and everything around you would be different, except the green movement. It’s still anti-nuclear, anti-technology, anti-industrial civilization. It still talks in mushy metaphors from the Aquarius age, cooing over Mother Earth and the Balance of Nature. And most of all, environmentalists are still acting like Old Testament prophets, warning of a plague of environmental ills about to rain down on humanity.

For example, you may have heard that a bunch of scientists produced a landmark report that concludes the earth is destined for ecological collapse, unless global population and consumption rates are restrained. No, I’m not talking about the UK’s just-published Royal Society report, which, among other things, recommends that developed countries put a brake on economic growth. I’m talking about that other landmark report from 1972, the one that became a totem of the environmental movement.

I mention the 40-year old Limits to Growth book in connection with the new Royal Society report not just to point up their Malthusian similarities (which Mark Lynas flags here), but also to demonstrate what a time warp the collective environmental mindset is stuck in. Even some British greens have recoiled in disgust at the outdated assumptions underlying the Royal Society’s report. Chris Goodall, author of Ten Technologies to Save the Planet, told the Guardian: “What an astonishingly weak, cliché ridden report this is…’Consumption’ to blame for all our problems? Growth is evil? A rich economy with technological advances is needed for radical decarbonisation. I do wish scientists would stop using their hatred of capitalism as an argument for cutting consumption.”

Goodall, it turns out, is exactly the kind of greenie (along with Lynas) I had in mind when I argued last week that only forward thinking modernists could save environmentalism from being consigned to junkshop irrelevance. I juxtaposed today’s green modernist with the backward thinking “green traditionalist,” who I said remained wedded to environmentalism’s doom and gloom narrative and resistant to the notion that economic growth was good for the planet. Modernists, I wrote, offered the more viable blueprint for sustainability:

“Pro-technology, pro-city, pro-growth, the green modernist has emerged in recent years to advance an alternative vision for the future. His mission is to remake environmentalism: Strip it of outdated mythologies and dogmas, make it less apocalyptic and more optimistic, broaden its constituency. In this vision, the Anthropocene is not something to rail against, but to embrace. It is about welcoming that world, not dreading it. It is about creating a future that environmentalists will help shape for the better.”

by Keith Kloor, Discover Magazine |  Read more:

Earth to Ben Bernanke

When the financial crisis struck in 2008, many economists took comfort in at least one aspect of the situation: the best possible person, Ben Bernanke, was in place as chairman of the Federal Reserve.

Bernanke was and is a fine economist. More than that, before joining the Fed, he wrote extensively, in academic studies of both the Great Depression and modern Japan, about the exact problems he would confront at the end of 2008. He argued forcefully for an aggressive response, castigating the Bank of Japan, the Fed’s counterpart, for its passivity. Presumably, the Fed under his leadership would be different.

Instead, while the Fed went to great lengths to rescue the financial system, it has done far less to rescue workers. The U.S. economy remains deeply depressed, with long-term unemployment in particular still disastrously high, a point Bernanke himself has recently emphasized. Yet the Fed isn’t taking strong action to rectify the situation.

The Bernanke Conundrum — the divergence between what Professor Bernanke advocated and what Chairman Bernanke has actually done — can be reconciled in a few possible ways. Maybe Professor Bernanke was wrong, and there’s nothing more a policy maker in this situation can do. Maybe politics are the impediment, and Chairman Bernanke has been forced to hide his inner professor. Or maybe the onetime academic has been assimilated by the Fed Borg and turned into a conventional central banker. Whichever account you prefer, however, the fact is that the Fed isn’t doing the job many economists expected it to do, and a result is mass suffering for American workers.

What the Fed Can Do

The Federal Reserve has a dual mandate: price stability and maximum employment. It normally tries to meet these goals by moving short-term interest rates, which it can do by adding to or subtracting from bank reserves. If the economy is weak and inflation is low, the Fed cuts rates; this makes borrowing attractive, stimulates private spending and, if all goes well, leads to economic recovery. If the economy is strong and inflation is a threat, the Fed raises rates; this discourages borrowing and spending, and the economy cools off.

Right now, the Fed believes that it’s facing a weak economy and subdued inflation, a situation in which it would ordinarily cut interest rates. The problem is that rates can’t be cut further. When the recession began in 2007, the Fed started slashing short-term interest rates; by November 2008 they had bottomed out near zero, where they remain to this day. And that was as far as the Fed could go, because (some narrow technical exceptions aside) interest rates can’t go lower. Investors won’t buy bonds if they can get a better return simply by putting a bunch of $100 bills in a safe. In other words, the Fed hit what’s known in economic jargon as the zero lower bound (or, alternatively, became stuck in a liquidity trap). The tool the Fed usually fights recessions with had reached the limits of its usefulness.

by Paul Krugman, NY Times |  Read more:
Illustration by Kelsey Dake

Industrial Farming. Almería Province, Spain

On the arid plains of southern Spain, produce is grown under the world's largest array of greenhouses and trucked north. Greenhouses use water and nutrients efficiently and produce all year—tomatoes in winter, for instance. But globally the challenge is grain and meat, not tomatoes. It takes 38 percent of Earth's ice-free surface to feed seven billion people today, and two billion more are expected by 2050.

From the essay: Enter the Anthropocene - Age of Man

Photo: Edward Burtynsky
via: National Geographic

Friday, April 27, 2012

Talking Heads


Lost on the Gene Map

A tiny dot of DNA, thousands of times smaller than a pinhead, exists in almost every cell of our bodies. Stored in its tightly wound double helix is the wisdom of nearly four billion years of evolution — the hereditary information that decides our hair colour, whether we might stutter, or if we have the potential to win an Olympic gold medal. Human DNA is typically divided into forty-six chromosomes, twenty-three inherited from each parent; the DNA on one chromosome includes hundreds, sometimes thousands, of genes. These gene segments of DNA (deoxyribonucleic acid) encode data that the cell expresses as proteins to build and operate the various parts of the body. The seven billion faces in the world, all different, reveal individual differences in our genetic makeup. But so much of our collective DNA is the same that we share a common genetic heritage: the human genome.

To comprehend genomes is to begin to unlock the mysteries of life. One of the aims of the Human Genome Project, an international research program launched in 1990, was to map and then sequence every bit of DNA in a composite human genome. The project was heralded as the first step toward personalized medicine, a new age in health care when prevention and treatment of illnesses would be guided by examining a person’s genome and genetic predispositions. Understandably, expectations for the Human Genome Project ran high, and in 1996 President Bill Clinton glowingly foretold a not-too-distant future in which parents, armed with a map of their newborn’s genetic structure, could identify the risks for illness. In his vision, the fruits of the project would help “organize the diet plan, the exercise plan, the medical treatment that would enable untold numbers of people to have far more full lives.”

When the HGP was completed in 2003, that vision was still out of reach. Thanks to technological advances, it’s now on the horizon. The expense of genomic sequencing is falling fast; in Canada today it costs $10,000 to sequence an individual genome. “Once a whole genome costs $1,000 or less, entire families will get their genomes sequenced,” says Michael Hayden, director of the Centre for Molecular Medicine and Therapeutics at the University of British Columbia. “But what will they do with that information?” Whole-genome sequencing generates enormous amounts of raw data that must be analyzed by highly qualified medical geneticists and genetic counsellors, both in short supply (Canada has about eighty medical geneticists and 230 genetic counsellors). “DNA Sequencing Caught in Deluge of Data,” ran one recent headline in the New York Times, reflecting a common view that modern medicine doesn’t yet have the expertise to tell us what this data means, much less how to act on it.  (...)

As the demand for whole-genome sequencing grows, so will profits, but the big money in personalized medicine will come from the development of treatments. Progress to date has been slow and confined to monogenic diseases such as Huntington’s, whose origin lies in a mutation on a single gene inherited from one parent. Because monogenic diseases are relatively rare, sequencing the genomes of those affected generates a manageable amount of data. Yet only 10 percent of monogenic diseases have yielded to treatment. On the other hand, multigenic disorders, such as cancer, diabetes, or Alzheimer’s, result from a complex interplay of genetic mutations and environmental factors. A given mutation on a person’s genome may not necessarily express as a malignant disease, so identifying the probability of a multigenic disease is extremely challenging. Traditional indicators such as family history, diet, and lifestyle may still be far more predictive than genetic testing for individual risk.

Compounding the problem, the bodily pathway of a multigenic disorder is complex and difficult to trace, and each person’s metabolism responds in a highly idiosyncratic way to the conditions that cause disease. To discover how individuals’ systems respond to the genetic risk for a multigenic disease requires comparing data gathered from the genomes of thousands of test subjects, ideally involving research findings and tissue samples from bio-banks worldwide. And once potential treatments for these disorders are identified, they require long-term clinical trials.

Convincing governments and other funders to support these kinds of initiatives rather than searching for a magic bullet to cure a disease such as cancer presents a challenge. “Getting population cohort studies launched in Canada is very difficult,” says Tom Hudson, president and scientific director of the Ontario Institute for Cancer Research. “It’s less sexy than funding basic human genome research.” Hudson has made consulting with clinicians and assessing their requirements a high priority. “We need to turn the question around,” he says. “We have to identify the medical need and make sure our research programs create paths to address those clinical questions. It’s like starting a puzzle from the end.”

More problematic is the reality that the human genome is still a vast catalogue of the unknown and scarcely known. The Human Genome Project’s most startling finding was that human genes, as currently defined, make up less than 2 percent of all the DNA on the genome, and that the total number of genes is relatively small. Scientists had predicted there might be 80,000 to 140,000 human genes, but the current tally is fewer than 25,000 — as one scientific paper put it, somewhere between that of a chicken and a grape. The remaining 98 percent of our DNA, once dismissed as “junk DNA,” is now taken more seriously. Researchers have focused on introns, the gaps between the coding segments of genes, which may play a crucial role in regulating gene expression by switching genes on and off in response to environmental stimuli.

by Mark Czarnecki, The Walrus |  Read more:
Illustration by Alain Pilon