Thursday, May 9, 2013

He Conceived the Mathematics of Roughness

Benoit Mandelbrot, the brilliant Polish-French-American mathematician who died in 2010, had a poet’s taste for complexity and strangeness. His genius for noticing deep links among far-flung phenomena led him to create a new branch of geometry, one that has deepened our understanding of both natural forms and patterns of human behavior. The key to it is a simple yet elusive idea, that of self-similarity.

To see what self-similarity means, consider a homely example: the cauliflower. Take a head of this vegetable and observe its form—the way it is composed of florets. Pull off one of those florets. What does it look like? It looks like a little head of cauliflower, with its own subflorets. Now pull off one of those subflorets. What does that look like? A still tinier cauliflower. If you continue this process—and you may soon need a magnifying glass—you’ll find that the smaller and smaller pieces all resemble the head you started with. The cauliflower is thus said to be self-similar. Each of its parts echoes the whole.
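The same part-echoes-whole structure can be produced in a few lines of code. The sketch below (Python, purely illustrative and not drawn from the article) builds the Cantor set by repeatedly deleting the middle third of every segment; any surviving piece, magnified, looks like the whole, just as each floret mirrors the cauliflower.

```python
# A minimal sketch of self-similarity: the Cantor set, built by repeatedly
# removing the middle third of every solid segment. Zoom in on any surviving
# piece and you see a scaled copy of the whole.

def cantor(levels: int, width: int = 81) -> list[str]:
    """Return text rows showing successive stages of the Cantor set."""
    rows = ["#" * width]
    segments = [(0, width)]              # (start, length) of each solid segment
    for _ in range(levels):
        next_segments = []
        for start, length in segments:
            third = length // 3
            # keep the left and right thirds, drop the middle
            next_segments.append((start, third))
            next_segments.append((start + 2 * third, third))
        segments = next_segments
        row = [" "] * width
        for start, length in segments:
            for i in range(start, start + length):
                row[i] = "#"
        rows.append("".join(row))
    return rows

if __name__ == "__main__":
    for row in cantor(3):
        print(row)
```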

Other self-similar phenomena, each with its distinctive form, include clouds, coastlines, bolts of lightning, clusters of galaxies, the network of blood vessels in our bodies, and, quite possibly, the pattern of ups and downs in financial markets. The closer you look at a coastline, the more you find it is jagged, not smooth, and each jagged segment contains smaller, similarly jagged segments that can be described by Mandelbrot’s methods. Because of the essential roughness of self-similar forms, classical mathematics is ill-equipped to deal with them. Its methods, from the Greeks on down to the last century, have been better suited to smooth forms, like circles. (Note that a circle is not self-similar: if you cut it up into smaller and smaller segments, those segments become nearly straight.)

Only in the last few decades has a mathematics of roughness emerged, one that can get a grip on self-similarity and kindred matters like turbulence, noise, clustering, and chaos. And Mandelbrot was the prime mover behind it. He had a peripatetic career, but he spent much of it as a researcher for IBM in upstate New York. In the late 1970s he became famous for popularizing the idea of self-similarity, and for coining the word “fractal” (from the Latin fractus, meaning broken) to designate self-similar forms. In 1980 he discovered the “Mandelbrot set,” whose shape—it looks a bit like a warty snowman or beetle—came to represent the newly fashionable science of chaos. What is perhaps less well known about Mandelbrot is the subversive work he did in economics. The financial models he created, based on his fractal ideas, implied that stock and currency markets were far riskier than the reigning consensus in business schools and investment banks supposed, and that wild gyrations—like the 777-point plunge in the Dow on September 29, 2008—were inevitable. (...)
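For readers curious what lies behind the "warty snowman," here is a minimal sketch using the standard textbook definition rather than anything quoted in the article: a complex number c belongs to the Mandelbrot set when the iteration z → z² + c, started from z = 0, stays bounded forever (in practice, tested for a fixed number of steps).

```python
# A hedged sketch of the Mandelbrot set (standard definition): a complex
# number c is in the set if the orbit of z -> z*z + c, starting at z = 0,
# never escapes. Bounded points are printed as '#', tracing the familiar
# warty outline in coarse ASCII.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:            # once |z| exceeds 2 the orbit must escape
            return False
    return True

if __name__ == "__main__":
    for y in range(24):
        im = 1.2 - y * 0.1
        line = ""
        for x in range(64):
            re = -2.0 + x * 0.05
            line += "#" if in_mandelbrot(complex(re, im)) else " "
        print(line)
```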

It was in casting about for a thesis topic that he had his first Keplerian glimmer. One day Uncle Szolem—who by now had written off Mandelbrot as a loss to mathematics—disdainfully pulled from a wastebasket and handed to him a reprint about something called Zipf’s law. The brainchild of an eccentric Harvard linguist named George Kingsley Zipf, this law concerns the frequency with which different words occur in written texts—newspaper articles, books, and so on. The most frequently occurring word in written English is “the,” followed by “of” and then “and.” Zipf ranked all the words in this way, and then plotted their frequency of usage. The resulting curve had an odd shape. Instead of falling gradually from the most common word to the least common, as one might expect, it plunged sharply at first and then leveled off into a long and gently sloping tail—rather like the path of a ski jumper. This shape indicates extreme inequality: a few hundred top-ranked words do almost all the work, while the large majority languish in desuetude. (If anything, Zipf underestimated this linguistic inequality: he was using James Joyce’s Ulysses, rich in esoteric words, as one of his main data sources.) The “law” Zipf came up with was a simple yet precise numerical relation between a word’s rank and its frequency of usage.
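Zipf's relation is easy to test on any machine-readable text. The sketch below is a rough illustration, not Zipf's own procedure, and the filename is a placeholder: it counts word frequencies, ranks them, and compares each observed count with C/rank, where C is the count of the most common word, which is roughly what the law predicts.

```python
# A rough check of Zipf's law on a plain-text file (the path is a
# placeholder): rank words by frequency and compare each frequency with
# C / rank, where C is the count of the top-ranked word.

import re
from collections import Counter

def zipf_table(path: str, top: int = 10) -> None:
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(words).most_common()
    c = counts[0][1]                      # frequency of the #1 word (e.g. "the")
    print(f"{'rank':>4}  {'word':<12}{'actual':>8}{'C/rank':>10}")
    for rank, (word, freq) in enumerate(counts[:top], start=1):
        print(f"{rank:>4}  {word:<12}{freq:>8}{c / rank:>10.0f}")

if __name__ == "__main__":
    zipf_table("sample.txt")              # placeholder filename
```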

Zipf’s law, which has been shown to hold for all languages, may seem a trifle. But the same basic principle turns out to be valid for a great variety of phenomena, including the size of islands, the populations of cities, the amount of time a book spends on the best-seller list, the number of links to a given website, and—as the Italian economist Vilfredo Pareto had discovered in the 1890s—a country’s distribution of income and wealth. All of these are examples of “power law” distributions.* Power laws apply, in nature or society, where there is extreme inequality or unevenness: where a high peak (corresponding to a handful of huge cities, or frequently used words, or very rich people) is followed by a low “long tail” (corresponding to a multitude of small towns, or rare words, or wage slaves). In such cases, the notion of “average” is meaningless.
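One precise sense in which the "average" breaks down, under an assumption the passage leaves implicit: when a power law's tail is heavy enough (exponent at or below 1, as in the most extreme Pareto cases), the distribution's theoretical mean is infinite, so the running average of samples never settles down. The sketch below, an illustration rather than anything from Mandelbrot's own models, draws Pareto samples by inverse-transform sampling and prints the drifting running mean.

```python
# Illustration of a power law with no finite mean: for a standard Pareto
# distribution with tail exponent alpha <= 1, the expected value is infinite,
# so the running average keeps growing instead of converging.

import random

def pareto_sample(alpha: float) -> float:
    """Inverse-transform sample from a standard Pareto (x_min = 1)."""
    u = random.random()
    return (1.0 - u) ** (-1.0 / alpha)

def running_means(alpha: float, n: int, checkpoints: tuple[int, ...]) -> None:
    total = 0.0
    for i in range(1, n + 1):
        total += pareto_sample(alpha)
        if i in checkpoints:
            print(f"after {i:>9,} samples, running mean = {total / i:,.1f}")

if __name__ == "__main__":
    random.seed(0)
    # alpha = 0.5: the sum grows roughly like n**2, so the mean keeps climbing
    running_means(alpha=0.5, n=1_000_000,
                  checkpoints=(1_000, 10_000, 100_000, 1_000_000))
```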

by Jim Holt, NY Review of Books |  Read more:
Image: Hank Morgan/Time Life Pictures/Getty Images