Sunday, September 30, 2012
My Life as a Replacement Ref: Three Unlikely Months Inside the NFL
Time's Sean Gregory spoke Friday with Jerry Frump, a long-time college football referee who served as a “replacement ref” during the recent NFL labor dispute. Highlights from the conversation, including Frump’s thoughts on the wide range of experience among his replacement colleagues, can be found here. Full transcript below:
Sean Gregory: When did you first start officiating? I believe you’ve done a bunch of games – what they used to call I-AA. How did you first start refereeing, when you were a kid?
Jerry Frump: I started in basketball first. And after one year in basketball an opportunity came up, a friend of mine said, “Do you want to try football?” I had never been a very good athlete, I was very small in high school, didn’t get my growth spurt, I guess if there was one, until later. But I got involved in officiating at a very young age.
How old were you when you started refereeing basketball?
I would have been 21.
And you played high school football?
I was a bench warmer. Small town. Like I said, I wasn’t very big, but I got my interest and what abilities I had probably after most guys had gotten involved and learned the fundamentals. But nonetheless I just loved sports. And so this became my passion.
I officiated basketball, I coached and officiated little league baseball, softball, semi-pro baseball, football, did a little bit of everything. And after a number of years my vocation caused me to move to the Chicago area. Starting off in a large area like this, it’s kind of starting over with your refereeing career, but I had an opportunity and got a few breaks with people and got involved and continued working at the high school level in the Chicago area then got involved working some junior college and Division III football. Along the way, it’s kind of a pecking order. You get some recognition, and somebody takes an interest in you at the next level and brings you along. And I had a supervisor at the Division III level who was very instrumental in pushing me to the next level and that was how I got involved in what was the Gateway Conference, which is now known as the Mountain Valley Conference. I had officiated that for 14 years. And shortly after getting involved in that back in 2001, you may recall that the NFL had another labor walkout and dispute. And I think I was one of about a half a dozen officials involved in the 2012 season who was also involved in 2001.
Circumstances in 2001 were significantly different. I think we had about four hours of training before they put us in a preseason game, but it was a very unique experience and something that I still remember to this day. Most of the guys worked also the first regular season game. I was one of the guys who could not get from my college game to the pro game the next day in time. So they had people that were on a crew and then they had some supplemental or alternatives that they had brought in for this purpose. At that time the NFL was willing to work with the college schedules, work around everybody’s timelines; this time it was made clear up front that that was not going to be the situation. They knew that this was going to be a more contentious negotiation. They said, “you have to make a choice.” As the NFL was putting out feelers for interested officials, the supervisors put out a notice that if you choose to make that decision, then obviously you’re sacrificing your college season – and probably your career. They didn’t say your career, but you could read between the lines. I’ve been officiating for over 40 years, this is my 41st or 42nd year of officiating football, period. It was an opportunity as I neared the end of my career, that I didn’t want to look back one day a year or two from now and say “gee, I wonder what if.”
So I rolled the dice and did that not knowing whether I would ever get on the field for a preseason game. And certainly not believing that it would go beyond that to get into the regular season, but you know, we did.
So you read between the lines, that if you worked for the NFL, you’d be out this season but possibly not be able to get back in.
That was the rumor. As a crew chief, there’s a lot of responsibilities put on you. I certainly knew and understood that in doing this, it left him in a lurch and it was a business decision that the supervisor had to make. I didn’t take it as a threat. I knew that if this got into the regular season, he couldn’t at the last minute try to bring in and put in a new crew chief in place. So I understand why they had to make that ultimatum.
And have you reached out to them to see where you’re at?
I have not.
Are you operating under the assumption right now that they might not let you back this year or down the road?
Correct.
And you feel like it’s almost like a blacklisting?
No. I think it’s a matter of when you step aside, somebody else is going to take over. For me to come back as a crew chief means that they’ve either got to get rid of somebody else, there’s got to be another opening, and there’s no guarantees of that.
by Sean Gregory, Time | Read more:
Photo: George Gojkovich/Getty Images
How to Make Almost Anything
A new digital revolution is coming, this time in fabrication. It draws on the same insights that led to the earlier digitizations of communication and computation, but now what is being programmed is the physical world rather than the virtual one. Digital fabrication will allow individuals to design and produce tangible objects on demand, wherever and whenever they need them. Widespread access to these technologies will challenge traditional models of business, aid, and education.
The roots of the revolution date back to 1952, when researchers at the Massachusetts Institute of Technology (MIT) wired an early digital computer to a milling machine, creating the first numerically controlled machine tool. By using a computer program instead of a machinist to turn the screws that moved the metal stock, the researchers were able to produce aircraft components with shapes that were more complex than could be made by hand. From that first revolving end mill, all sorts of cutting tools have been mounted on computer-controlled platforms, including jets of water carrying abrasives that can cut through hard materials, lasers that can quickly carve fine features, and slender electrically charged wires that can make long thin cuts.
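[ed. For a sense of what "numerically controlled" means in practice, here is a minimal, purely illustrative sketch in Python — the function name and parameters are my own invention, not anything from the article or a real controller — that turns a shape described as data (a circular pocket of a given radius) into the list of tool positions a motor controller would follow.]

```python
import math

def spiral_pocket_toolpath(radius_mm, step_mm=0.5, points_per_turn=72):
    """Generate (x, y) tool positions spiraling outward to clear a circular pocket.

    The point of the sketch: a geometric description (a circle of a given
    radius) becomes a sequence of coordinates a motor controller can follow,
    which is the essential translation numerical control performs in place
    of a machinist turning screws by hand.
    """
    path = [(0.0, 0.0)]
    angle = 0.0
    r = 0.0
    while r < radius_mm:
        angle += 2 * math.pi / points_per_turn
        # The radius grows by one step per full turn, so cutting passes overlap slightly.
        r = min(radius_mm, step_mm * angle / (2 * math.pi))
        path.append((r * math.cos(angle), r * math.sin(angle)))
    return path

if __name__ == "__main__":
    points = spiral_pocket_toolpath(radius_mm=10.0)
    print(f"{len(points)} tool positions; last point: {points[-1]}")
```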
Today, numerically controlled machines touch almost every commercial product, whether directly (producing everything from laptop cases to jet engines) or indirectly (producing the tools that mold and stamp mass-produced goods). And yet all these modern descendants of the first numerically controlled machine tool share its original limitation: they can cut, but they cannot reach internal structures. This means, for example, that the axle of a wheel must be manufactured separately from the bearing it passes through.
In the 1980s, however, computer-controlled fabrication processes that added rather than removed material (called additive manufacturing) came on the market. Thanks to 3-D printing, a bearing and an axle could be built by the same machine at the same time. A range of 3-D printing processes are now available, including thermally fusing plastic filaments, using ultraviolet light to cross-link polymer resins, depositing adhesive droplets to bind a powder, cutting and laminating sheets of paper, and shining a laser beam to fuse metal particles. Businesses already use 3-D printers to model products before producing them, a process referred to as rapid prototyping. Companies also rely on the technology to make objects with complex shapes, such as jewelry and medical implants. Research groups have even used 3-D printers to build structures out of cells with the goal of printing living organs.
Additive manufacturing has been widely hailed as a revolution, featured on the cover of publications from Wired to The Economist. This is, however, a curious sort of revolution, proclaimed more by its observers than its practitioners. In a well-equipped workshop, a 3-D printer might be used for about a quarter of the jobs, with other machines doing the rest. One reason is that the printers are slow, taking hours or even days to make things. Other computer-controlled tools can produce parts faster, or with finer features, or that are larger, lighter, or stronger. Glowing articles about 3-D printers read like the stories in the 1950s that proclaimed that microwave ovens were the future of cooking. Microwaves are convenient, but they don’t replace the rest of the kitchen.
The revolution is not additive versus subtractive manufacturing; it is the ability to turn data into things and things into data. That is what is coming; for some perspective, there is a close analogy with the history of computing. The first step in that development was the arrival of large mainframe computers in the 1950s, which only corporations, governments, and elite institutions could afford. Next came the development of minicomputers in the 1960s, led by Digital Equipment Corporation’s PDP family of computers, which was based on MIT’s first transistorized computer, the TX-0. These brought down the cost of a computer from hundreds of thousands of dollars to tens of thousands. That was still too much for an individual but was affordable for research groups, university departments, and smaller companies. The people who used these devices developed the applications for just about everything one does now on a computer: sending e-mail, writing in a word processor, playing video games, listening to music. After minicomputers came hobbyist computers. The best known of these, the MITS Altair 8800, was sold in 1975 for about $1,000 assembled or about $400 in kit form. Its capabilities were rudimentary, but it changed the lives of a generation of computing pioneers, who could now own a machine individually. Finally, computing truly turned personal with the appearance of the IBM personal computer in 1981. It was relatively compact, easy to use, useful, and affordable.
Just as with the old mainframes, only institutions can afford the modern versions of the early bulky and expensive computer-controlled milling devices. In the 1980s, first-generation rapid prototyping systems from companies such as 3D Systems, Stratasys, Epilog Laser, and Universal brought the price of computer-controlled manufacturing systems down from hundreds of thousands of dollars to tens of thousands, making them attractive to research groups. The next-generation digital fabrication products on the market now, such as the RepRap, the MakerBot, the Ultimaker, the PopFab, and the MTM Snap, sell for thousands of dollars assembled or hundreds of dollars as parts. Unlike the digital fabrication tools that came before them, these tools have plans that are typically freely shared, so that those who own the tools (like those who owned the hobbyist computers) can not only use them but also make more of them and modify them. Integrated personal digital fabricators comparable to the personal computer do not yet exist, but they will.
Personal fabrication has been around for years as a science-fiction staple. When the crew of the TV series Star Trek: The Next Generation was confronted by a particularly challenging plot development, they could use the onboard replicator to make whatever they needed. Scientists at a number of labs (including mine) are now working on the real thing, developing processes that can place individual atoms and molecules into whatever structure they want. Unlike 3-D printers today, these will be able to build complete functional systems at once, with no need for parts to be assembled. The aim is to not only produce the parts for a drone, for example, but build a complete vehicle that can fly right out of the printer. This goal is still years away, but it is not necessary to wait: most of the computer functions one uses today were invented in the minicomputer era, long before they would flourish in the era of personal computing. Similarly, although today’s digital manufacturing machines are still in their infancy, they can already be used to make (almost) anything, anywhere. That changes everything.
Will We Ever Predict Earthquakes?
Indeed, some scientists, such as Robert Geller from the University of Tokyo, think that prediction is outright impossible. In a 1997 paper, starkly titled “Earthquakes Cannot Be Predicted,” he argues that the factors that influence the birth and growth of earthquakes are so numerous and complex that measuring and analysing them is a fool’s errand. Nothing in the last 15 years has changed his mind. In an email to me, he wrote: “All serious scientists know there are no prospects in the immediate future.”
Finding fault
Earthquakes start when two of the Earth’s tectonic plates – the huge, moving slabs of land that carry the continents – move around each other. The plates squash, stretch and catch against each other, storing energy which is then suddenly released, breaking and shaking the rock around them.
Those are the basics; the details are much more complex. Ross Stein from the United States Geological Survey explains the problem by comparing tectonic plates to a brick sitting on a desk, and attached to a fishing rod by a rubber band. You can reel it in to mimic the shifting plates, and because the rubber band is elastic, just like the Earth’s crust, the brick doesn’t slide smoothly. Instead, as you turn the reel, the band stretches until, suddenly, the brick zips forward. That’s an earthquake.
If you did this 10 times, says Stein, you would see a huge difference in the number of turns it took to move the brick, or in the distance the brick slid before stopping. “Even when we simplify the Earth down to this ridiculous extreme, we still don’t get regular earthquakes,” he says. The Earth, of course, isn’t simple. The mass, elasticity and friction of the sliding plates vary between different areas, or even different parts of the same fault. All these factors can influence where an earthquake starts (which, Stein says, can be an area as small as your living room), when it starts, how strong it is, and how long it lasts. “We have no business thinking we’ll see regular periodic earthquakes in the crust,” he says.
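[ed. Stein's brick-and-rubber-band setup is what seismologists call a stick-slip, or spring-slider, model. Here's a toy version in Python — all parameter values are made up for illustration — showing how even a small amount of randomness in friction makes the "earthquakes" irregular under perfectly steady loading.]

```python
import random

def stick_slip_events(n_steps=100_000, reel_rate=1.0, stiffness=0.01,
                      mean_strength=50.0, strength_jitter=0.2, seed=1):
    """Toy spring-slider ('brick on a rubber band') model of a fault.

    The reel stretches the band at a constant rate; when the pulling force
    exceeds the brick's (slightly random) static friction, the brick slips
    and the stored stretch is released. Even with constant loading, the
    event times and sizes come out irregular.
    """
    rng = random.Random(seed)
    stretch = 0.0
    strength = mean_strength * (1 + strength_jitter * rng.uniform(-1, 1))
    events = []  # (time_step, slip_size)
    for t in range(n_steps):
        stretch += reel_rate                 # steady "plate" loading
        if stiffness * stretch >= strength:  # static friction exceeded
            events.append((t, stretch))      # the "earthquake": sudden slip
            stretch = 0.0                    # stored strain released
            strength = mean_strength * (1 + strength_jitter * rng.uniform(-1, 1))
    return events

if __name__ == "__main__":
    events = stick_slip_events()
    intervals = [b[0] - a[0] for a, b in zip(events, events[1:])]
    print(f"{len(events)} events; inter-event times range "
          f"{min(intervals)}-{max(intervals)} steps")
```

Even in this stripped-down model, with only one source of randomness, the time between events varies by tens of percent — which is Stein's point about expecting regular earthquakes from the real crust.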
That hasn’t stopped people from trying to find “anomalies” that reliably precede an earthquake, including animals acting strangely, radon gas seeping from rocks, patterns of precursor earthquakes, and electromagnetic signals from pressurised rocks. None of these have been backed by strong evidence. Studying such “anomalies” may eventually tell us something useful about the physics of earthquakes, but their value towards a predictive test is questionable.
by Ed Yong, Not Exactly Rocket Science | Read more:
The Keys to the Park
Alexander Rower, a grandson of Mr. Calder, lives on Gramercy Park, as does Samuel G. White, whose great-grandfather was Stanford White, and who has taken on an advisory role in a major redesign of its landscaping. Both are key-holders who, validated by an impressive heritage, are exerting a significant influence on Gramercy Park’s 21st-century profile. Because Gramercy is fenced, not walled in, the Calder and the rest of the evolving interior scenery are visible in all seasons to passers-by and the legions of dog-walkers who daily patrol the perimeter.
Parkside residents rationalize that their communal front yard is privatized for its own protection. Besides, they, not the city it enhances, have footed its bills for nearly two centuries. Any of the 39 buildings on the park that fails to pay the yearly assessment fee of $7,500 per lot, which grants it two keys — fees and keys multiply accordingly for buildings on multiple lots — will have its key privileges rescinded. The penalty is so painful that it has never had to be applied.
For connection-challenged mortals, though, the park is increasingly problematic to appreciate from within, particularly now that Arthur W. and William Lie Zeckendorf, and Robert A. M. Stern, the architect of their 15 Central Park West project, are recalibrating property values in a stratospheric direction by bringing the neighborhood its first-ever $42 million duplex penthouse, at 18 Gramercy Park South, formerly a Salvation Army residence for single women.
The unique housewarming gift the Zeckendorfs decided to bestow on the buyer-who-has-everything types purchasing there is none other than a small metallic item they might not already own: a personal key to the park. (...)
The locks and keys are changed every year, and the four gates are, for further safekeeping, self-locking: the key is required for exiting as well as entering.
“In a way it’s kind of a priceless amenity,” said Maurice Mann, the landlord who restored 36 Gramercy Park East, “because everyone is so enamored with the park, and owning a key still holds a certain amount of bragging rights and prestige. Not everybody can have one, so it’s like, if there’s something I can’t have, I want it.”
by Robin Finn, NY Times | Read more:
Photo: Chang W. Lee
Saturday, September 29, 2012
Fender: A Guitar Maker Aims to Stay Plugged In
In 1948, a radio repairman named Leo Fender took a piece of ash, bolted on a length of maple and attached an electronic transducer.
You know the rest, even if you don’t know you know the rest.
You’ve heard it — in the guitar riffs of Buddy Holly, Jimi Hendrix, George Harrison, Keith Richards, Eric Clapton, Pete Townshend, Bruce Springsteen, Mark Knopfler, Kurt Cobain and on and on.
It’s the sound of a Fender electric guitar. Mr. Fender’s company, now known as the Fender Musical Instruments Corporation, is the world’s largest maker of guitars. Its Stratocaster, which made its debut in 1954, is still a top seller. For many, the Strat’s cutting tone and sexy, double-cutaway curves mean rock ’n’ roll.
But this heart of rock isn’t beating quite the way it once did. Like many other American manufacturers, Fender is struggling to hold on to what it’s got in a tight economy. Sales and profits are down this year. A Strat, after all, is what economists call a consumer discretionary item — a nonessential.
More than macroeconomics, however, is at work here. Fender, based in Scottsdale, Ariz., is also being buffeted by powerful forces on Wall Street.
A private investment firm, Weston Presidio, controls nearly half of the company and has been looking for an exit. It pushed to take Fender public in March, to howls in the guitar-o-sphere that Fender was selling out. But, to Fender’s embarrassment, investors balked. They were worried about the lofty price and, even more, about how Fender can keep growing.
And that, really, is the crux of the matter. Times have changed, and so has music. In the 1950s, ’60s and ’70s, electric guitars powered rock and pop. Today, turntable rigs, drum machines and sampler synthesizers drive music like hip-hop. Electric guitars, huge as they are, have lost some of their old magic in this era of Jay-Z, Kanye West and “The Voice.”
Games like Guitar Hero have helped underpin sales, but teenagers who once might have hankered after guitars now get by making music on laptops. It’s worth remembering that the accordion was once the most popular instrument in America.
Granted, Fender is such a powerful brand that it can ride out the lean times. But sales of all kinds of musical instruments plunged during the recession, and they still haven’t recovered fully. Sales of all instruments in the United States totaled $6.5 billion last year, down roughly 13 percent from their peak in 2005, according to Music Trades, which tracks the industry.
Many of the guitars that are selling these days are cheap ones made in places like China — ones that cost a small fraction of, say, a $1,599 Fender Artist “Eric Clapton” Strat. Fender has been making its own lines of inexpensive guitars overseas for years, but the question is how the company can keep growing and compete profitably in a fast-moving, global marketplace. Its margins are already under pressure.
“What possible niche is left unexploited by Fender?” asks Jeffrey Bronchick, founder of Cove Street Capital, an investment advisory firm in El Segundo, Calif., and the owner of some 40 guitars, including four Fenders.
by Janet Morrissey, NY Times | Read more:
Photo: Monica Almeida/
How Wikipedia Works
Not only is Wikipedia one of the most used resources for data-gathering and seemingly instantaneous information retrieval – it’s also free to use with no advertisements clogging up the interface (except those quirky requests to donate). So just how does such a key knowledge resource function, and who are the faces behind such an indispensable modern take on the traditional Encyclopaedia?
In 1994, Ward Cunningham created a website format previously unknown to Internet users at the time: one that would drive knowledge creation and collation to new heights. This website was called a Wiki.
This style of website propelled the user into the collaboration spotlight by encouraging anyone to update and edit internet-hosted content in real time. Although this early version of collaborative content creation now seems standard, remember that this was both pre-social media and prior to the multitude of current platforms and applications that cater for users who are keen to create, edit and share information in an aggregation space.
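[ed. As a minimal sketch of the mechanism being described — not Wikipedia's actual MediaWiki software, just an illustration with made-up names — a wiki reduces to a shared store of pages where anyone's edit appends a new revision that every reader immediately sees:]

```python
from datetime import datetime, timezone

class MiniWiki:
    """A toy wiki: named pages, open editing, and a kept revision history."""

    def __init__(self):
        self._history = {}  # title -> list of (timestamp, editor, text)

    def edit(self, title, editor, text):
        """Anyone may create or update a page; every edit is recorded."""
        self._history.setdefault(title, []).append(
            (datetime.now(timezone.utc), editor, text)
        )

    def read(self, title):
        """Readers always see the latest revision."""
        return self._history[title][-1][2]

    def revisions(self, title):
        """The full edit trail, which is what makes open editing auditable."""
        return list(self._history[title])

if __name__ == "__main__":
    wiki = MiniWiki()
    wiki.edit("Wiki", "ward", "A wiki is a collaboratively edited website.")
    wiki.edit("Wiki", "anon", "A wiki is a website anyone can edit in real time.")
    print(wiki.read("Wiki"))
    print(len(wiki.revisions("Wiki")), "revisions kept")
```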
On his personal web page, Ward says of the Wiki:
The idea of a “Wiki” may seem odd at first, but dive in, explore its links and it will soon seem familiar. “Wiki” is a composition system; it’s a discussion medium; it’s a repository; it’s a mail system; it’s a tool for collaboration. We don’t know quite what it is, but we do know it’s a fun way to communicate asynchronously across the network.
In 2000, this original concept of tying large amounts of content to an open, network-based collation system propelled Jimmy Wales and Larry Sanger to create Nupedia. Nupedia (unlike Wikipedia) was initially created as a for-profit project that acted as an online encyclopaedia funded by Bomis. The project had lofty ideals: it was to be free to access for all users and all content was to be academically robust, with a mandatory peer-review policy as well as a 7-step review process.
From the outset, Nupedia had performance problems. In its first year, Nupedia had only 20 or so articles approved for publication, with as many as 150 drafts stagnating in the yet-to-be published vault. It was also assumed by the founders of Nupedia that scholars and experts would want to voluntarily provide high-end content, regardless of the absence of incentives to do so. Then there was the infighting between Sanger and Wales, with Sanger determined to adhere strongly to the content control of all published material and demanding more reliable content (Sanger would go on to create a more academically-robust alternative called Citizendium that is still in operation).
What further added to the demise of Nupedia was the fact that both Sanger and Wales wanted to adopt the Wiki format in order to utilise elements they thought would act to enhance the existing Nupedia model, such as ease of editing, less restricted review processes and a more inclusive and open approach to information organization. Thus, Wikipedia was born as a side-project to help enhance Nupedia – instead, it ended up eclipsing it and providing the trigger for Nupedia’s eventual demise.
When the non-profit Wikipedia project went live in January 2001, both Sanger and Wales had no idea that this side-project would become the force it is today. By 2002, this knowledge repository contained upwards of 20,000 entries; by the end of 2006, it had reached the 1 million article mark. Wikipedia itself provides up-to-date statistics on its current contributor base and popularity:
As of September 2012, Wikipedia includes over 23 million freely usable articles in 285 languages, written by over 36 million registered users and numerous anonymous contributors worldwide. According to Alexa Internet, Wikipedia is the world’s sixth-most-popular website, visited monthly by around 12% of all internet users.
by Mez Breeze, The Next Web | Read more:
Photo: Mandel Ngan/Getty Images
[ed. I've been experiencing this a lot lately and it pisses me off (sometimes even the ad won't load). My usual response is to just shut the whole thing down and move on. Hopefully video delivery systems will find a better way to monetize their content than by alienating viewers.]
via
Native Tongues
All of this first group of cars head off to the south. As they part, the riders wave their farewells, whereupon each member of this curious small squadron officially commences his long outbound adventure—toward a clutch of carefully selected small towns, some of them hundreds and even thousands of miles away. These first few cars are bound to cities situated in the more obscure corners of Florida, Oklahoma, and Alabama. Other cars that would follow later then went off to yet more cities and towns scattered evenly across every corner of every mainland state in America. The scene as the cars leave Madison is dreamy and tinted with romance, especially seen at the remove of nearly fifty years. Certainly nothing about it would seem to have anything remotely to do with the thankless drudgery of lexicography.
But it had everything to do with the business, not of illicit love, interstate crime, or the secret movement of monies, but of dictionary making. For the cars, which would become briefly famous, at least in the somewhat fame-starved world of lexicography, were the University of Wisconsin Word Wagons. All were customized 1966 Dodge A100 Sportsman models, purchased en masse with government grant money. Equipped for long-haul journeying, they were powered by the legendarily indestructible Chrysler Slant-Six 170-horsepower engine and appointed with modest domestic fixings that included a camp bed, sink, and stove. Each also had two cumbersome reel-to-reel tape recorders and a large number of tape spools.
The drivers and passengers who manned the wagons were volunteers bent to one overarching task: that of collecting America’s other language. They were being sent to more than a thousand cities, towns, villages, and hamlets to discover and record, before it became too late and everyone started to speak like everybody else, the oral evidence of exactly what words and phrases Americans in those places spoke, heard, and read, out in the boondocks and across the prairies, down in the hollows and up on the ranges, clear across the great beyond and in the not very long ago.
These volunteers were charged with their duties by someone who might at first blush seem utterly unsuitable for the task of examining American speech: a Briton, born in Kingston, of a Canadian father and a Jamaican mother: Frederic Gomes Cassidy, a man whose reputation—he died twelve years ago, aged ninety-two—is now about to be consolidated as one of the greatest lexicographers this country has ever known. Cassidy’s standing—he is now widely regarded as this continent’s answer to James Murray, the first editor of the Oxford English Dictionary; Cassidy was a longtime English professor at the University of Wisconsin, while Murray’s chops were earned at Oxford—rests on one magnificent achievement: his creation of a monumental dictionary of American dialect speech, conceived roughly half a century ago, and over which he presided for most of his professional life.
The five-thousand-page, five-volume book, known formally as the Dictionary of American Regional English and colloquially just as DARE, is now at last fully complete. The first volume appeared in 1985: it listed tens of thousands of geographically specific dialect words, from tall flowering plants known in the South as “Aaron’s Rod,” to a kind of soup much favored in Wisconsin, made from duck’s blood, known as “czarnina.” The next two volumes appeared in the 1990s, the fourth after 2000, so assiduously planned and organized by Cassidy as to be uninterrupted by his passing. The fifth and final volume, the culminating triumph of this extraordinary project, is being published this March—it offers up regionalisms running alphabetically from “slab highway” (as concrete-covered roads are apparently still known in Indiana and Missouri) to “zydeco,” not the music itself, but a kind of raucous and high-energy musical party that is held in a long swathe of villages arcing from Galveston to Baton Rouge.
“Aaron’s rod” to “zydeco”—between these two verbal bookends lies an immense and largely hidden American vocabulary, one that surely, more than perhaps any other aspect of society, reveals the wonderfully chaotic pluribus out of which two centuries of commerce and convention have forged the duller reality of the unum. Which was precisely what Cassidy and his fellow editors sought to do—to capture, before it faded away, the linguistic coat of many colors of this immigrant-made country, and to preserve it in snapshot, in part for strictly academic purposes, in part for the good of history, and in part, maybe, on the off chance that the best of the lexicon might one day be revived.
by Simon Winchester, Lapham's Quarterly | Read more:
Photo: UW-Madison Archives
Can Etsy Go Pro Without Losing Its Soul?
Two years after setting up her online shop, Terri Johnson had the kind of holiday season most business owners dream about. By Thanksgiving 2009, orders for her custom-embroidered goods started streaming in at a breakneck pace. And the volume only increased heading into December. Johnson was hardly feeling festive, though. To get the merchandise out the door, she worked nonstop, hunched over the embroidery machine in her basement, stitching robes, aprons, and shirts until just a few days before Christmas. “I was barely seeing my family,” she recalls. The problem was that Johnson’s main venue, shopmemento, is a storefront on Etsy.com. And she feared that if she hired help, invested in new equipment, or rented a commercial workspace, she might run afoul of Etsy policies and get kicked off the site.
After all, Etsy was designed as a marketplace for “the handmade.” The whole point is that the site offers a way for individual makers to connect with individual buyers. But trying to keep up with orders on her own was threatening to turn Johnson’s business into a one-woman sweatshop. Etsy rules allow “collectives,” but that’s a vague and unbusinesslike term. “No one knows what it means,” she says. After the holiday crush, Johnson was so spent that she shuttered her store for the entire month of January to recover. She knew that if she wanted to build a real business, she’d eventually have to scale up production. She wondered if she had outgrown Etsy.
This was a big problem for Johnson, but it was also troubling for Etsy. Today the site attracts 42 million unique visitors a month, who browse almost 15 million products. More than 800,000 sellers use the service. Most are producing handmade goods as a sideline. But losing motivated sellers like Johnson, who are making a full-time living on Etsy, means saying good-bye to a hugely profitable part of its community.
From its start in 2005, Etsy was a rhetoric-heavy enterprise that promised to do more than simply turn a profit. It promoted itself as an economy-shifter, making possible a parallel retail universe that countered the alienation of mass production with personal connections and unique, handcrafted items. There was no reason to outsource manufacturing, the thinking went, if a sea of individual sellers took the act of making into their own hands—literally.
The approach worked well enough to establish the startup. Etsy makes money from every listing (20 cents apiece) as well as every sale (a 3.5 percent cut). It has been profitable since 2009, and in July 2012 year-over-year sales were up more than 75 percent. Not bad for a retailer selling mostly nonessential products during one of the most sluggish chapters in the history of American consumer spending.
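[ed. A back-of-the-envelope sketch of that fee structure (Python; the 20-cent listing fee and 3.5 percent cut are the figures from the article, while the sale price below is made up for illustration):]

```python
LISTING_FEE = 0.20        # per item listed, per the article
TRANSACTION_RATE = 0.035  # 3.5 percent of the sale price

def etsy_revenue(sale_price, listings_used=1):
    """Etsy's take on a single sale under the fee structure described above."""
    return listings_used * LISTING_FEE + TRANSACTION_RATE * sale_price

if __name__ == "__main__":
    # e.g. a $40 custom-embroidered robe listed once (illustrative numbers)
    print(f"Etsy earns ${etsy_revenue(40.00):.2f} on a $40 sale")
```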
But now Etsy finds itself at a crossroads. Sellers like Johnson, reaching the limits of what the service allows (as well as what it can do for them), are being forced to consider moving on. Meanwhile, the hobbyists and artisans who make up the rest of the marketplace still value Etsy’s founding ethos—that handmade items have an intrinsic value that should be celebrated and given a forum outside of traditional retail.
How to reconcile these competing visions of what it means to be an Etsy seller isn’t clear. While the site wants to remain an accessible entry point for newbies, it doesn’t want the narrative arc for successful sellers to arrive at the inevitable plot point: “And then I started a real business.”
by Rob Walker, Wired | Read more:
Photo: Zachary Zavislak
Arthur O. Sulzberger, Publisher Who Changed The Times, Dies at 86
[ed. One of the longest obituaries I think I've ever read. A history of the New York Times reflected in the life of Mr. Sulzberger.]
His death, after a long illness, was announced by his family.
Mr. Sulzberger’s tenure, as publisher of the newspaper and as chairman and chief executive of The New York Times Company, reached across 34 years, from the heyday of postwar America to the twilight of the 20th century, from the era of hot lead and Linotype machines to the birth of the digital world.
The paper he took over as publisher in 1963 was the paper it had been for decades: respected and influential, often setting the national agenda. But it was also in precarious financial condition and somewhat insular, having been a tightly held family operation since 1896, when it was bought by his grandfather Adolph S. Ochs.
By the 1990s, when Mr. Sulzberger passed the reins to his son, first as publisher in 1992 and then as chairman in 1997, the enterprise had been transformed. The Times was now national in scope, distributed from coast to coast, and it had become the heart of a diversified, multibillion-dollar media operation that came to encompass newspapers, magazines, television and radio stations and online ventures.
The expansion reflected Mr. Sulzberger’s belief that a news organization, above all, had to be profitable if it hoped to maintain a vibrant, independent voice. As John F. Akers, a retired chairman of I.B.M. and for many years a Times company board member, put it, “Making money so that you could continue to do good journalism was always a fundamental part of the thinking.”
Mr. Sulzberger’s insistence on independence was shown in his decision in 1971 to publish a secret government history of the Vietnam War known as the Pentagon Papers. It was a defining moment for him and, in the view of many journalists and historians, his finest.
In thousands of pages, this highly classified archive detailed Washington’s legacy of deceit and evasion as it stumbled through an unpopular war. When the Pentagon Papers were divulged in a series of articles in June 1971, an embarrassed Nixon administration demanded that the series be stopped immediately, citing national security considerations. The Times refused, on First Amendment grounds, and won its case in the United States Supreme Court in a landmark ruling on press freedom. (...)
A newspaper publisher may be a business executive, but the head of an institution like The Times is also inevitably cast as a leader in legal defenses of the First Amendment. It was a role Mr. Sulzberger embraced, and never with more enduring consequences than in his decision to publish the Pentagon Papers.
“This was not a breach of the national security,” Mr. Sulzberger said at the time. “We gave away no national secrets. We didn’t jeopardize any American soldiers or Marines overseas.” Of the government, he added, “It’s a wonderful way if you’ve got egg on your face to prevent anybody from knowing it, stamp it secret and put it away.”
The government obtained a temporary restraining order from a federal judge in Manhattan. It was the first time in United States history that a court, on national security grounds, had stopped a newspaper in advance from publishing a specific article. The Washington Post soon began running its own articles based on the same documents, and both papers took their case to the Supreme Court. In late June, the court issued its decision rejecting the administration’s national-security arguments and upholding a newspaper’s right to publish in the face of efforts to impose “prior restraint.”
The significance of that ruling for the future of government-press relations has been debated. But this much was certain: It established the primacy of a free press in the face of a government’s insistence on secrecy. In the 40 years since the court handed down its ruling, there has not been another instance of officially sanctioned prior restraint to keep an American newspaper from printing secret information on national security grounds.
In a 1996 speech to a group of journalists, Mr. Sulzberger said of the documents that he “had no doubt but that the American people had a right to read them and that we at The Times had an obligation to publish them.” But typically — he had an unpretentious manner and could not resist a good joke or, for that matter, a bad pun — he tried to keep even a matter this weighty from becoming too ponderous.
The fact is, Mr. Sulzberger said, the documents were tough sledding. “Until I read the Pentagon Papers,” he said, “I did not know that it was possible to read and sleep at the same time.”
Nor did he understand why President Richard M. Nixon had fought so hard “to squelch these papers,” he added.
“I would have thought that he would bemoan their publication, joyfully blame the mess on Lyndon Johnson and move on to Watergate,” Mr. Sulzberger said. “But then I never understood Washington.”
Friday, September 28, 2012
My Name is Joe Biden and I’ll Be Your Server
Folks, when I was six years old my dad came to me one night. My dad was a car guy. Hard worker, decent guy. Hadn’t had an easy life. He climbed the stairs to my room one night and he sat on the edge of my bed and he said to me, he said, “Champ, your mom worked hard on that dinner tonight. She worked hard on it. She literally worked on it for hours. And when you and your brothers told her you didn’t like it, you know what, Joey? That hurt her. It hurt.” And I felt (lowers voice to a husky whisper) ashamed. Because lemme tell you something. He was right. My dad was right. My mom worked hard on that dinner, and it was delicious. Almost as delicious as our Chicken Fontina Quesadilla with Garlicky Guacamole. That’s our special appetizer tonight. It’s the special. It’s the special. (His voice rising) And the chef worked hard on it, just like my mom, God love her, and if you believe in the chef’s values of hard work and creative spicing you should order it, although if you don’t like chicken we can substitute shrimp for a small upcharge.
Thank you. Thank you. Now, hold on. There’s something else you need to know.
Our fish special is halibut with a mango-avocado salsa and Yukon Gold potatoes, and it’s market-priced at sixteen-ninety-five. Sounds like a lot of money, right? Sounds like “Hey, Joe, that’s a piece of fish and a little topping there, and some potatoes.” “Bidaydas,” my great-grandmother from County Louth would have called ’em. You know what I’m talking about. Just simple, basic, sitting-around-the-kitchen-table-on-a-Tuesday-night food. Nothin’ fancy, right? But, folks, that’s not the whole story. If you believe that, you’re not . . . getting . . . the whole . . . story. Because lemme tell you about these Yukon Gold potatoes. These Yukon Gold potatoes are brushed with extra-virgin olive oil and hand-sprinkled with pink Himalayan sea salt, and then José, our prep guy. . . . Well. Lemme tell you about José. (He pauses, looks down, clears his throat.)
I get . . . I get emotional talking about José. This is a guy who—José gets here at ten in the morning. Every morning, rain or shine. Takes the bus here. Has to transfer twice. Literally gets off one bus and onto another. Twice. Never complains. Rain, snow, it’s hailin’ out there. . . . The guy literally does not complain. Never. Never heard it. José walks in, hangs his coat on a hook, big smile on his face, says hello to everybody—Sal the dishwasher, Angie the sous-chef, Frank, Donna, Pat. . . . And then do you know what he does? Do you know what José does? I’ll tell you what he does, and folks, folks, this is the point I want to make. With his own hands, he sprinkles fresh house-grown rosemary on those potatoes (raises voice to a thundering crescendo), and they are golden brown on the outside and soft on the inside and they are delicious! They are delicious! They are delicious!
by Bill Barol, New Yorker | Read more:
Illustration: Miguel Gallardo
Glass Works
The office of Wendell Weeks, Corning’s CEO, is on the second floor, looking out onto the Chemung River. It was here that Steve Jobs gave the 53-year-old Weeks a seemingly impossible task: Make millions of square feet of ultrathin, ultrastrong glass that didn’t yet exist. Oh, and do it in six months. The story of their collaboration—including Jobs’ attempt to lecture Weeks on the principles of glass and his insistence that such a feat could be accomplished—is well known. How Corning actually pulled it off is not.
Weeks joined Corning in 1983; before assuming the top post in 2005, he oversaw both the company’s television and specialty glass businesses. Talk to him about glass and he describes it as something exotic and beautiful—a material whose potential is just starting to be unlocked by scientists. He’ll gush about its inherent touchability and authenticity, only to segue into a lecture about radio-frequency transparency. “There’s a sort of fundamental truth in the design value of glass,” Weeks says, holding up a clear pebble of the stuff. “It’s like a found object; it’s cool to the touch; it’s smooth but has surface to it. What you’d really want is for this to come alive. That’d be a perfect product.”
Weeks and Jobs shared an appreciation for design. Both men obsessed over details. And both gravitated toward big challenges and ideas. But while Jobs was dictatorial in his management style, Weeks (like many of his predecessors at Corning) tends to encourage a degree of insubordination. “The separation between myself and any of the bench scientists is nonexistent,” he says. “We can work in these small teams in a very relaxed way that’s still hyperintense.”
Indeed, even though it’s a big company—29,000 employees and revenue of $7.9 billion in 2011—Corning still thinks and acts like a small one, something made easier by its relatively remote location, an annual attrition rate that hovers around 1 percent, and a vast institutional memory. (Stookey, now 97, and other legends still roam the halls and labs of Sullivan Park, Corning’s R&D facility.) “We’re all lifers here,” Weeks says, smiling. “We’ve known each other for a long time and succeeded and failed together a number of times.”
One of the first conversations between Weeks and Jobs actually had nothing to do with glass. Corning scientists were toying around with microprojection technologies—specifically, better ways of using synthetic green lasers. The thought was that people wouldn’t want to stare at tiny cell phone screens to watch movies and TV shows, and projection seemed like a natural solution. But when Weeks spoke to Jobs about it, Apple’s chief called the idea dumb. He did mention he was working on something better, though—a device whose entire surface was a display. It was called the iPhone.
by Bryan Gardiner, Wired | Read more:
Photo: Max Aguilera-Hellweg
Meet Mira, the Supercomputer That Makes Universes
The real challenge for cosmology is figuring out exactly what happened to those first nascent galaxies. Our telescopes don't let us watch them in time-lapse; we can't fast forward our images of the young universe. Instead, cosmologists must craft mathematical narratives that explain why some of those galaxies flew apart from one another, while others merged and fell into the enormous clusters and filaments that we see around us today. Even when cosmologists manage to cobble together a plausible such story, they find it difficult to check their work. If you can't see a galaxy at every stage of its evolution, how do you make sure your story about it matches up with reality? How do you follow a galaxy through nearly all of time? Thanks to the astonishing computational power of supercomputers, a solution to this problem is beginning to emerge: You build a new universe.
In October, the world's third fastest supercomputer, Mira, is scheduled to run the largest, most complex universe simulation ever attempted. The simulation will cram more than 12 billion years' worth of cosmic evolution into just two weeks, tracking trillions of particles as they slowly coalesce into the web-like structure that defines our universe on a large scale. Cosmic simulations have been around for decades, but the technology needed to run a trillion-particle simulation only recently became available. Thanks to Moore's Law, that technology is getting better every year. If Moore's Law holds, the supercomputers of the late 2010s will be a thousand times more powerful than Mira and her peers. That means computational cosmologists will be able to run more simulations at faster speeds and higher resolutions. The virtual universes they create will become the testing ground for our most sophisticated ideas about the cosmos.
Salman Habib is a senior physicist at the Argonne National Laboratory and the leader of the research team working with Mira to create simulations of the universe. Last week, I talked to Habib about cosmology, supercomputing, and what Mira might tell us about the enormous cosmic web we find ourselves in.
Help me get a handle on how your project is going to work. As I understand it, you're going to create a computer simulation of the early universe just after the Big Bang, and in this simulation you will have trillions of virtual particles interacting with each other -- and with the laws of physics -- over a time period of more than 13 billion years. And once the simulation has run its course, you'll be looking to see if what comes out at the end resembles what we see with our telescopes. Is that right?
Habib: That's a good approximation of it. Our primary interest is large-scale structure formation throughout the universe and so we try to begin our simulations well after the Big Bang, and even well after the microwave background era. Let me explain why. We're not sure how to simulate the very beginning of the universe because the physics are very complicated and partially unknown, and even if we could, the early universe is structurally homogenous relative to the complexity that we see now, so you don't need a supercomputer to simulate it. Later on, at the time of the microwave background radiation, we have a much better idea about what's going on. WMAP and Planck have given us a really clear picture of what the universe looked like at that time, but even then the universe is still very homogenous -- its density perturbations are something like one part in a hundred thousand. With that kind of homogeneity, you can still do the calculations and modeling without a supercomputer. But if you fast forward to the point where the universe is about a million times denser than it is now, that's when things get so complicated that you want to hand over the calculations to a supercomputer.
Now the trillions of particles we're talking about aren't supposed to be actual physical particles like protons or neutrons or whatever. Because these trillions of particles are meant to represent the entire universe, they are extremely massive, something in the range of a billion suns. We know the gravitational mechanics of how these particles interact, and so we evolve them forward to see what kind of densities and structure they produce, both as a result of gravity and the expansion of the universe. So, that's essentially what the simulation does: it takes an initial condition and moves it forward to the present to see if our ideas about structure formation in the universe are correct.
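[ed. For the curious: the loop Habib describes -- take an initial particle distribution and push it forward under gravity to see what structure emerges -- can be sketched in a few lines of Python. This is only a toy illustration of the idea, not the code that runs on Mira; the particle count, masses, and step size are arbitrary, and real cosmological codes also fold in the expansion of the universe and far more sophisticated solvers.]

```python
# Toy sketch of the N-body idea described above: start from a nearly
# homogeneous initial condition and evolve massive "tracer" particles
# forward under their mutual gravity (units with G = 1). All values here
# are illustrative, not the settings of any real cosmological code.
import numpy as np

def nbody_step(pos, vel, mass, dt, softening=0.05):
    """Advance positions and velocities one step with direct-sum gravity."""
    # Pairwise separation vectors: diff[i, j] points from particle i to j.
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    dist2 = (diff ** 2).sum(axis=-1) + softening ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)  # no self-interaction
    # Acceleration on i: sum over j of m_j * (r_j - r_i) / |r_j - r_i|^3
    acc = (diff * inv_d3[..., np.newaxis] *
           mass[np.newaxis, :, np.newaxis]).sum(axis=1)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

rng = np.random.default_rng(0)
n = 500                                  # toy value; Mira tracks trillions
pos = rng.uniform(-1.0, 1.0, size=(n, 3))  # nearly homogeneous start
vel = np.zeros((n, 3))
mass = np.ones(n)                        # each particle stands in for ~a billion suns

for _ in range(200):                     # step forward and let structure form
    pos, vel = nbody_step(pos, vel, mass, dt=1e-3)
```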
by Ross Andersen, The Atlantic | Read more:
Photo: Argonne National Laboratory
Thursday, September 27, 2012
Yes, Texas is Different
At one point, the screens go black and we see projected in white letters:
TEXAS IS BIGGER THAN FRANCE AND ENGLAND
Black again. Then (you knew it was coming):
…COMBINED.
A bit later, another black screen/white letters sequence:
BEFORE TEXAS WAS A STATE…
(portentous pause)
TEXAS WAS A NATION
This, of course, is a reference to the Republic of Texas, as this spacious corner of the world styled itself from 1836 to 1846. In truth, the Republic of Texas was a transitional entity, the larval stage of the State of Texas. Nevertheless, “The Star of Destiny” has a point. Texas is different. It is big, for a start. Not as big as Alaska, which is bigger than France and England and Germany and Japan … combined, but big enough. And it was a nominally independent if ramshackle republic, with embassies and a Congress and everything. Vermont, Hawaii, and, arguably, California were once independent republics, too, but they don’t make a fetish of it. Texas does.
Texas is different. The qualities—the very existence—of the Bob Bullock Museum of Texas State History are evidence of that. Modesty is not the museum’s keynote. On the plaza out front is a huge sculpture of a five-pointed star. It must be twenty feet high. (“Mmm, subtle,” our ninth-grader murmured.) Inside, the exhibits are an uneasy combination of ethnic correctness and unrestrained boasting. One would think that Texas, besides being very, very great, has always been ruled by a kind of U.N. Security Council consisting of one white male, one white female, one black person, one American Indian, and one Mexican or Mexican-American, all of them exemplars of—the phrase is repeated ad nauseam—courage, determination, and hard work.
The stories the exhibits tell are mostly about the state’s economy, agricultural and industrial. Whether it’s oil extraction or cattle raising, rice farming or silicon chipmaking, quicksilver mining or sheepherding, the elements of each are usually the same. A few men become extraordinarily rich. These men are praised for their courage, determination, and hard work. The laborers whose labor produces their wealth are ruthlessly exploited. (The exhibits don’t put it this way, obviously, but the facts are there if you have eyes to see them.) These unfortunates may be poor white men; they may be Mexican immigrant women; they may be enslaved blacks or African-Americans held in sharecropper peonage. They, too, are praised for their courage, determination, and hard work. It all adds up to an unending progression of triumphs for the Texas spirit.
The boasting does not take long to taste a little sour. It begins to feel defensive and insecure. One begins to sense that the museum, on some level, knows that a lot of it is, well, bullockshit.
And yet, and yet. There are redeeming grace notes. The current temporary exhibit at the Bob Bullock Museum is one of them. It’s about Texas music: blues, rock, country, country rock, bluegrass, singer-songwriter, alt-whatever. In this exhibit, the boastfulness feels like simple accuracy and the nods to “diversity” are not a stretch. Respect is shown, properly, to Willie Nelson, Leadbelly, Stevie Ray Vaughan (whose battered Stratocaster occupies a place of honor), Janis Joplin, Big Mama Thornton, and many equally deserving others. And, as befits Austin, there’s live music. During our visit, a fine, fringed six-piece cowboy-country band played and sang a tribute to mid-century radio. All was forgiven.
Does the name Bob Bullock ring a bell? As lieutenant governor “under” George W. Bush (in Texas the post is independently elected and has powers that rival those of the governorship itself), Bob Bullock (1929-1999), a Democrat, was responsible for Dubya’s pre-Presidential reputation for bipartisanship and moderation. In his long career in state government, Bullock was, as far as I can tell, a net plus for Texas, even if his late-in-life Bush-enabling made him a net minus for the nation and the world. But you have to hand it to Texas. How many states would name their enormous marble-clad museum of state history not after a big donor but after a backroom career politician who, by the way, was also a five-times-married alcoholic?
by Hendrik Hertzberg, New Yorker | Read more:
Photo: Paul Morse
It's a Drone World
“A TV drone flies beside Canada’s Erick Guay during the second practice of the men’s Alpine skiing World Cup downhill race at the Lauberhorn in Wengen, January 12, 2012.” - Reuters (via)
[ed. I think when we look back on this decade, the rise of drones (and robotics in general) will be viewed as one of the most significant developments affecting the future, on par with cloud computing and digital money as game-changing technologies. Certainly the art of warfare has been altered forever. Eventually, everyone will have drones deployed for some purpose or another (countries, corporations, scientists, terrorists, etc.). Want to spy on your ex-wife, pre-plan your next hiking trip, have your pizza delivered hot and fresh? There will be a drone business that can help you with that -- probably already is. In any case, near-surface airspace will soon get a lot more crowded (not to mention personal airspace, when hummingbird and insect drones are perfected).]
h/t New Inquiry
Gaston La Touche (French, 1854-1913), Pardon in Brittany, 1896. Oil on canvas. Art Institute of Chicago.
Are Hackers Heroes?
On the last day of June of this year, a tech website called Redmond Pie posted two articles in quick succession that, on their face, had nothing to do with each other. The first, with the headline “Root Nexus 7 on Android 4.1 Jelly Bean, Unlock Bootloader, And Flash ClockworkMod Recovery,” was a tutorial on how to modify the software—mainly in order to gain control of the operating system—in Google’s brand-new tablet computer, the Nexus 7, a device so fresh that it hadn’t yet shipped to consumers.
The second headline was slightly more decipherable to the casual reader: “New OS X Tibet Malware Puts in an Appearance, Sends User’s Personal Information to a Remote Server.” That story, which referred to the discovery of a so-called “Trojan horse” computer virus on certain machines in Tibet, pointed out that Apple computers were no longer as impervious to malicious viruses and worms as they had been in the past and that this attack, which targeted Tibetan activists against the Chinese regime, was not random but political. When the Tibetan activists downloaded the infected file, it would secretly connect their computers to a server in China that could monitor their activities and capture the contents of their machines. (The Redmond Pie writer speculated that the reason Apple computers were targeted in this attack was that they were the preferred brand of the Dalai Lama.)
In fact, the Nexus 7 story and the Tibetan Trojan horse story were both about the same thing: hacking and hackers, although the hacking done by the Nexus 7 hackers—who contribute to an online website called Rootzwiki—was very different from that done by the crew homing in on the Tibetan activists. Hacking and hackers have become such inclusive, generic terms that their meaning, now, must almost always be derived from the context. Still, in the last few years, after the British phone-hacking scandal, after Anonymous and LulzSec, after Stuxnet, in which Americans and Israelis used a computer virus to break centrifuges and delay the Iranian nuclear project, after any number of identity thefts, that context has tended to accent the destructive side of hacking.
In February, when Facebook CEO Mark Zuckerberg observed in his letter to potential shareholders before taking the company public that Facebook embraced a philosophy called “The Hacker Way,” he was not being provocative but, rather, trying to tip the balance in the other direction. (He was also drawing on the words of the veteran technology reporter Steven Levy, whose 1984 book Hackers: Heroes of the Computer Revolution was the first serious attempt to understand the subculture that gave us Steve Jobs, Steve Wozniak, and Bill Gates.) According to Zuckerberg:
In reality, hacking just means building something quickly or testing the boundaries of what can be done. Like most things, it can be used for good or bad, but the vast majority of hackers I’ve met tend to be idealistic people who want to have a positive impact on the world…. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it—often in the face of people who say it’s impossible or are content with the status quo.

Though it might seem neutral, the word “fix” turns out to be open to interpretation. Was the new Google Nexus 7 tablet broken before it was boxed up and shipped? Not to Google or to the vast majority of people who ordered it, but yes to those who saw its specifications and noticed, for instance, that it had a relatively small amount of built-in memory, and wanted to enable the machine to accept an external storage device that could greatly expand its memory. Similarly, there was nothing wrong with the original iPhone—it worked just fine. But for users hoping to load software that was not authored or vetted by Apple, and those who didn’t want to be restricted to a particular service provider (AT&T), and those who liked to tinker and considered it their right as owners to do so, the various “jailbreaks”—or ways of circumventing such restrictions—provided by hackers have addressed and, in Zuckerberg’s term, “fixed” these issues.
Apple, on the other hand, did not see it this way and argued to the United States Copyright Office that modifying an iPhone’s operating system constituted copyright infringement and thus was illegal. In a ruling in 2010, the Copyright Office disagreed, stating that there was “no basis for copyright law to assist Apple in protecting its restrictive business model.” Copyright laws vary from country to country, though, and already this year three people in Japan have been arrested under that country’s recently updated Unfair Competition Prevention Act for modifying—i.e., hacking—Nintendo game consoles. As for the Nexus 7 hackers, they need not worry: Google’s Android software is “open source,” meaning that it is released to the public, which is free to fiddle with it, to an extent.
The salient point of Mark Zuckerberg’s paean to hackers, and the reason he took the opportunity to inform potential shareholders, is that hacking can, and often does, improve products. It exposes vulnerabilities, supplies innovations, and demonstrates both what is possible and what consumers want. Still, as Zuckerberg also intimated, hacking has a dark side, one that has eclipsed its playful, sporty, creative side, especially in the popular imagination, and with good reason. Hacking has become the preferred tool for a certain kind of thief, one who lifts money from electronic bank accounts and sells personal information, particularly as it relates to credit cards and passwords, in a thriving international Internet underground. Hacking has also become a method used for extortion, public humiliation, business disruption, intellectual property theft, espionage, and, possibly, war.
by Sue Halpern, NY Review of Books | Read more:
Photo: Paul Grover/Rex Features/AP Images
Proportion Control
No other number attracts such a fevered following as the golden ratio. Approximately equal to 1.618 and denoted by the Greek letter phi, it’s been canonized as the “Divine Proportion.” Its devotees will tell you it’s ubiquitous in nature, art and architecture. And there are plastic surgeons and financial mavens who will tell you it’s the secret to pretty faces and handsome returns.
Not bad for the second-most famous irrational number. In your face, pi!
It even made a cameo appearance in “The Da Vinci Code.” While trying to decipher the clues left at the murder scene in the Louvre that opens the novel, the hero, Robert Langdon, “felt himself suddenly reeling back to Harvard, standing in front of his ‘Symbolism in Art’ class, writing his favorite number on the chalkboard. 1.618.”
Langdon tells his class that, among other astonishing things, da Vinci “was the first to show that the human body is literally made of building blocks whose proportional ratios always equal phi.”
“Don’t believe me?” Langdon challenged. “Next time you’re in the shower, take a tape measure.”
A couple of football players snickered.
“Not just you insecure jocks,” Langdon prompted. “All of you. Guys and girls. Try it. Measure the distance from the tip of your head to the floor. Then divide that by the distance from your belly button to the floor. Guess what number you get.”
“Not phi!” one of the jocks blurted out in disbelief.
“Yes, phi,” Langdon replied. “One-point-six-one-eight. [...] My friends, each of you is a walking tribute to the Divine Proportion.”

I tried it. I’m 6-foot-1, and my belly button is 44 inches from the floor. So my ratio is 73 inches divided by 44 inches, which is about 1.66. That’s about 2.5 percent bigger than 1.618. But then again, nobody ever mistook me for Apollo.
The golden ratio originated in the ideal world of geometry. The Pythagoreans discovered it in their studies of regular pentagons, pentagrams and other geometric figures. A few hundred years later, Euclid gave the first written description of the golden ratio in connection with the problem of dividing a line segment into two unequal parts, such that the whole is to the long part as the long is to the short.
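[ed. Euclid’s definition translates directly into the familiar number: if the whole is to the long part as the long part is to the short, and you call that common ratio phi, then phi must satisfy phi + 1 = phi squared, which gives (1 + √5)/2, roughly 1.618. A quick back-of-the-envelope check in Python, reusing the author’s own height and belly-button measurements from the passage above; the variable names are just illustrative.]

```python
# Check Euclid's division of a segment: (whole / long) == (long / short).
# Setting short = 1 and long = phi, the condition becomes phi + 1 = phi**2,
# whose positive root is (1 + sqrt(5)) / 2.
from math import sqrt

phi = (1 + sqrt(5)) / 2
print(phi)                                          # 1.618033988749895

long_part, short_part = phi, 1.0
whole = long_part + short_part
print(whole / long_part, long_part / short_part)    # both ~1.618

# Langdon's height-to-navel test, with the measurements quoted above:
height, navel = 73, 44                              # inches
print(height / navel)                               # ~1.659, about 2.5 percent above phi
```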