Friday, November 7, 2014
Participatory Budgeting: The City That Gave Its Residents $3 Million
This was supposed to be a happy story. A story about a town reeling from bankruptcy, violence, and crime, that brought its residents together with an innovative strategy, one that other cities across the country are trying to emulate. This was to be a story with a happy ending full of community gardens, puppy neutering, and even some repaired roads.
But there are few happy endings in city government these days, with spiraling pension costs, sluggish economic recovery, and Americans’ general fatigue when it comes to civic engagement. And this story is no exception.
It starts in 2011 in Vallejo, a hilly and charming town of 117,000 just a ferry ride from San Francisco. The town’s streets are dotted with pastel Victorian homes and palm trees, its waterfront has a walking path with vistas of distant yellow and green hills, and its population is one of the most diverse in the country, evenly divided among whites, blacks, Latinos, and Filipinos.
But Vallejo has struggled for years. Crippled by high pension costs and public-employee salaries, it filed for bankruptcy in 2008. Things didn’t get much better after the city emerged from Chapter 9 in 2011: Crime was bad and the city’s police department was perpetually short-staffed. There were 10 murders in 2010, 14 in 2012, and 24 in 2013.
Services had been cut to the bone: Dan Keen, the current city manager, said that when he came on board in 2012, every department was operating at staff levels lower than he’d ever seen in his career.
“The city was still shell-shocked, still reeling from the reduction in services,” he told me from his office near the waterfront.
Trying to figure out how to avoid yet another romp through bankruptcy, leaders proposed a 1 percent sales tax in 2011. Residents begrudgingly agreed–50.4 percent of the town voted to adopt the sales tax, and the proposal passed by just 159 votes. One of the opponents’ biggest concerns, voiced by Councilwoman Marti Brown, was that the new revenues would go to employee salaries and pensions, just as much of the town’s money had before, and that residents would be even more tired of Vallejo’s cycle of taxing and spending with nothing to show for it.
Then Brown had an idea.
She’d read about an experiment in Brazil where a local government allowed its residents to suggest and vote on ideas for how to spend tax dollars. The process, with the clunky name Participatory Budgeting, allowed citizens, rather than politicians, to decide how to spend infrastructure dollars. Would this be a way, she wondered, of making sure Vallejo’s money would be spent in a way its residents liked?
“I thought, there’s got to be a better way to do budgeting, there needs to be more transparency,” she said, in an interview. “It was a huge part of what helped propel Vallejo toward bankruptcy–that financial process is not a transparent process. The public doesn’t get to see it, they don’t know anything about it.”
She started talking with the Participatory Budgeting Project, a group that had formed at the World Social Forum in Brazil in 2005 in an effort to bring participatory budgeting to the U.S. With their help, Brown made Vallejo’s city council a proposal: Take a chunk of the money raised from the new sales tax, and let Vallejoans spend it how they wished, as long as the ideas they came up with benefited the public and could be implemented by the city or in collaboration with public agencies, non-profits, or religious institutions. The Project had helped Chicago adopt this type of budgeting in its 49th ward, but Brown's idea was bigger: to do it citywide.
When Brown floated the idea, it was not universally popular. This was a city, after all, just recovering from bankruptcy, that needed every penny. The idea of letting residents do almost anything with millions of dollars rubbed some officials the wrong way.
“You are telling a city council that’s just been handed $10 million dollars—after going through years of cutbacks—you can’t decide how that money is going to be spent,” said Keen, the city manager. “You can understand that for some elected officials, that’s pretty tough.”
But participatory budgeting had made a big difference in some cities in Brazil. In a comprehensive study, academics found that Brazilian cities that allowed citizens to decide what to do with public funds saw lower infant mortality and increased spending on services such as education and sanitation, compared to cities that did not adopt the process.
Brown suggested that this would be a way to help people have faith in their local community once again, and proposed that about one-third of the sales tax money, about $3.2 million, would be set aside for residents to control directly.
Participatory budgeting, or PB, as the residents called it, barely passed in city council when it went up for a vote in April 2012, with Brown and her contingent winning 4-3. Some residents were furious, calling the idea that it would improve democracy “utter nonsense” in letters to the local newspaper, the Vallejo Times Herald. Even the mayor disliked the idea, telling reporters “this is not the time to fund whatever we want.”
Still, many residents were curious. They started showing up at meetings, and taking on tasks: writing the “rule book,” which would govern how the process worked; attending budget assemblies, where residents brainstormed how to spend the money; and joining the steering committee, a group of 21 locals that planned meetings and reached out to locals to get them involved.
Ravi Shankar, a longtime Vallejo resident who joined the steering committee (no, not that Ravi Shankar), said that getting involved in participatory budgeting seemed like a way to put the people-first sentiments of the Occupy movement into use.
“People started coming together in extraordinary ways,” he said.
by Alana Semuels, Atlantic | Read more:
Image: Robert Galbraith/Reuters
Thursday, November 6, 2014
Head in Ass
[See also: National Review's warning about a "governing trap" (I know, it hurts my head too). Response here]
Like millions of other Americans who made the decision — or mistake, depending on your point of view — to donate money to Democratic Party candidates in the past, my inbox was filled with messages for months warning me that only the generosity of ordinary citizens like myself could prevent a Republican landslide in the midterm elections. Sometimes these requests bore an air of reasonability. But most sounded desperate.
Yet for all the urgency of these solicitations, few of them gave any sense of what donors might expect in return for their support. Democratic fundraising campaigns had no problem conjuring the specter of shadowy right-wing cabals “buying” elections. But they rarely even considered the prospect that they were also selling a product.
Imagine seeing an advertisement for a toothpaste whose sole message was that it wasn’t the leading brand. The only way anyone would become passionate about this alternative is if its competitor were successfully portrayed as ineffective or, worse still, poisonous. Realistically, though, in advanced consumer societies like the United States, the number of people who might be convinced that their choice of toothpaste is a life-or-death matter is not great. Most of them know full well that the only substantive difference between mainstream brands is a matter of aesthetics, how they look or taste.
Perhaps it’s unfair to imply that the two major political parties in the United States are basically Crest and Colgate. There are some hot-button issues where the Republican and Democratic line diverge enough to result in real-world consequences: reproductive health, collective bargaining, environmental safeguards. Yet there can be no denying that more and more Americans have concluded that both parties are far more interested in self-preservation than trying to address the nation’s most pressing problems.
That’s why their approval ratings have fallen to such an abysmal level. To most middle-of-the-road voters, the only thing worse than the party out of office is the party in office. And that’s the primary reason why the Democratic Party was bound to lose ground this election cycle regardless of its choice of strategy: holding the White House and a slim majority in the Senate was sufficient to make it the target of the electorate’s disaffection.
by Charlie Bertsch, Souciant | Read more:
Image: uncredited
The 36 People Who Run Wikipedia
And yet it not only exists, it almost is the Web: Wikipedia is the sixth most popular website in the world, with 22.5 million contributors and 736 million edits in English alone. It’s as if the entire population of Australia (23.6 million) each contributed 30 times. Last year Wikimedia sites overall (which includes the likes of Wikiquote and Wiktionary, as well as Wikipedia itself) averaged 20 billion pageviews per month.
This paradox of its success is most striking at the top of the Wikimedia food chain. Running this huge enterprise is a little-known hierarchy of volunteer leaders, effectively each working an extra part-time job to police the site, battle vandals, seek out spammers and sock puppets, and clean and control what you see. Thousands of people around the world actually apply to do more work for free as a Wikimedia administrator, autopatroller, rollbacker, or bureaucrat.
But at the very top of this tree are 36 users who demonstrate Wikimedia in its most concentrated form: the stewards. They wield “global rights” — the ability to edit anything — and respond to crises and controversies across all Wiki platforms. They come from all around the world, receive no compensation, and rarely, if ever, encounter each other offline. You definitely don’t know them — but their work is essential to understanding how Wikimedia’s unique existence has thrived.
by Stephen Lurie, Medium | Read more:
Image: Pablo Delcan
The Art of Not Working at Work
[ed. See also: The Two-Factor Theory, also known as Herzberg's motivation-hygiene theory]
Two years ago a civil servant in the German town of Menden wrote a farewell message to his colleagues on the day of his retirement stating that he had not done anything for 14 years. “Since 1998,” he wrote, “I was present but not really there. So I’m going to be well prepared for retirement—Adieu.” The e-mail was leaked to Germany's Westfalen-Post and quickly became world news. The public work ethic had been wounded and in the days that followed the mayor of Menden lamented the incident, saying he “felt a good dose of rage.”
The municipality of Menden sent out a press release regretting that the employee never informed his superiors of his inactivity. In a lesser-known interview with the German newspaper Bild a month later, the former employee responded that his e-mail had been misconstrued. He had not been avoiding work for 14 years; as his department grew, his assignments were simply handed over to others. “There never was any frustration on my part, and I would have written the e-mail even today. I have always offered my services, but it’s not my problem if they don’t want them,” he said.
The story of this German bureaucrat raised some questions about modern-day slacking. Does having a job necessarily entail work? If not, how and why does a job lose its substance? And what can be done to make employees less lazy—or is that even the right question to ask in a system that’s set up in the way that ours is? After talking to 40 dedicated loafers, I think I can take a stab at some answers.
Most work sociologists tend toward the view that non-work at work is a marginal, if not negligible, phenomenon. What all statistics point towards is a general intensification of work, with more and more burnouts and other stress syndromes troubling us. Yet there are more-detailed surveys reporting that the average time spent on private activities at work is between 1.5 and three hours a day. By measuring the flows of audiences for certain websites, it has also been observed that, by the turn of the century, 70 percent of the U.S. internet traffic passing through pornographic sites did so during working hours, and that 60 percent of all online purchases were made between 9 a.m. and 5 p.m. What is sometimes called “cyberloafing” has, furthermore, not only been observed in the U.S. (in which most work-time surveys are conducted), but also in nations such as Singapore, Germany, and Finland.
Even if the percentage of workers who claim they are working at the pinnacle of their capacity all the time is slowly increasing, the majority still remains unaffected. In fact, the proportion of people who say they never work hard has long been far greater than those who say they always do. The articles and books about the stressed-out fraction of humanity can be counted in the thousands, but why has so little been written about this opposite extreme? (...)
In The Living Dead, David Bolchover rues “the dominance of image over reality, of obfuscation over clarity, of politics over performance,” and in City Slackers, Steve McKevitt, a disillusioned “business and communications expert,” gloomily declares: “In a society where presentation is everything, it’s no longer about what you do, it’s about how you look like you’re doing it.”
by Roland Paulsen, Atlantic | Read more:
Image: Lauren Giordano
Wednesday, November 5, 2014
Read It and Reap

Modern Farmer appeared in the spring of 2013. After three issues, it won a National Magazine Award; no other magazine had ever won so quickly. According to Gardner, though, Modern Farmer is less a magazine than an emblem of “an international life-style brand.” This is the life style of people who want to “eat food with a better backstory”—from slaughterhouses that follow humane practices, and from farmers who farm clean and treat their workers decently. Also, food cultists who like obscure foods and believe that fruits and vegetables taste different depending on where they are grown. Also, aspirational farmers, hobby farmers, intern farmers, student farmers, WWOOFers—people who take part in programs sponsored by the World Wide Opportunities on Organic Farms movement—and people who stay at hotels on farms where they eat things grown by the owners. Plus idlers in cubicles searching for cheap farmland and chicken fences and what kind of goats give the best milk. Such people “have a foot in each world, rural and urban,” Gardner says. She calls them Rurbanistas, a term she started using after hearing the Spanish word rurbanismo, which describes the migration from the city to the countryside. Rurbanistas typify the Modern Farmer audience. (...)
“‘Magazine’ is a word I learned very early on not to use,” Gardner said. “Investors didn’t want to go near you. I would say I was trying to make a brand around the idea of modern farming. My friends would say, ‘I don’t get this, it has nothing to do with your life.’ ” Photographers began to send pictures of distressed trucks and falling-down barns. “ ‘Here’s a photo shoot we could do,’ they would say, and I was, ‘No, we’re not doing that photo shoot, ever.’ Everybody also wanted to shoot tables in fields. And another one they thought was so clever was a farmer in a field with a laptop.”
Warren Street is lined with stores selling vintage furniture. Gardner stopped in front of a window with a display of metal lawn chairs. “I am so looking forward to spending money again,” she said. “I’m still in startup mode. We are really close to being super successful financially, and I can’t wait. The one thing everybody told me was ‘You guys are never going to sell a magazine with an animal on the cover,’ and I’m so happy that’s not true.” (...)
Including people who sell ads, Gardner has a staff of eight. She owns a small portion of Modern Farmer. The majority belongs to a Canadian investor named Frank Giustra, to whom Gardner was introduced by someone she knows in Vancouver. Gardner would not disclose the amount that Giustra put in, but among the investors that she courted she was known to be seeking two to three million dollars. In an ideal angel investor situation, Giustra might have received twenty to thirty per cent of the company. A lawyer later told Gardner that she had signed one of the three worst deals he had ever seen. (...)
Sometime after the first issue, however, during the summer of 2013, it became apparent that Gardner had likely overestimated the first year’s revenues and that the magazine would eventually need more money. Giustra apparently hadn’t expected to contribute more than he had already, and, Gardner said, he told her that she should find another investor. (Giustra declined to comment.) In May, 2014, after the National Magazine Award, Giustra said that he would pay for one more issue—the one to be prepared over the summer and published in September. When July arrived without Gardner’s having found someone else to put money in, Giustra told her that he would invest more only if she gave him a portion of her shares, an arrangement that is customary in such circumstances. However, he proposed additional terms that Gardner regarded as inequitable. Meanwhile, not knowing how much longer Modern Farmer would last, some of her staff began looking for other work.
Gardner must overcome two obstacles to find new investors. One is that Giustra owns too much of the company. “In venture capital, usually you have several investors, no one of whom owns more than fifty per cent of the company, and they all share an idea of the future,” Kevin Powers, the controller and finance director of the company Vox Media, told me. Powers is also a member of Modern Farmer’s advisory board. Second, an investor would wonder why Giustra was behaving as if he wanted to sell. According to Sam Holdsworth, an investment banker who raises money for early stage media and entertainment companies and who has started several magazines, “When the principal investor tries to leave early, it makes you wonder why.”
by Alec Wilkinson, New Yorker | Read more:
Image: Chris Buck
Tuesday, November 4, 2014
Lobbyists, Bearing Gifts, Pursue Attorneys General
When the executives who distribute 5-Hour Energy, the popular caffeinated drinks, learned that attorneys general in more than 30 states were investigating allegations of deceptive advertising — a serious financial threat to the company — they moved quickly to shut the investigations down, one state at a time.
But success did not come in court or at a negotiating table.
Instead, it came at the opulent Loews Santa Monica Beach Hotel in California, with its panoramic ocean views, where more than a dozen state attorneys general had gathered last year for cocktails, dinners and fund-raisers organized by the Democratic Attorneys General Association. A lawyer for 5-Hour Energy roamed the event, setting her sights on Attorney General Chris Koster of Missouri, whose office was one of those investigating the company.
“My client just received notification that Missouri is on this,” the lawyer, Lori Kalani, told him.
Ms. Kalani’s firm, Dickstein Shapiro, had courted the attorney general at dinners and conferences and with thousands of dollars in campaign contributions. Mr. Koster told Ms. Kalani that he was unaware of the investigation, and he reached for his phone and called his office. By the end of the weekend, he had ordered his staff to pull out of the inquiry, a clear victory for 5-Hour Energy.
The quick reversal, confirmed by Mr. Koster and Ms. Kalani, was part of a pattern of successful lobbying of Mr. Koster by the law firm on behalf of clients like Pfizer and AT&T — and evidence of a largely hidden dynamic at work in state attorneys general offices across the country.
Attorneys general are now the object of aggressive pursuit by lobbyists and lawyers who use campaign contributions, personal appeals at lavish corporate-sponsored conferences and other means to push them to drop investigations, change policies, negotiate favorable settlements or pressure federal regulators, an investigation by The New York Times has found.
But unlike the lobbying rules covering other elected officials, there are few revolving-door restrictions or disclosure requirements governing state attorneys general, who serve as “the people’s lawyers” by protecting consumers and individual citizens.
A result is that the routine lobbying and deal-making occur largely out of view. But the extent of the cause and effect is laid bare in The Times’s review of more than 6,000 emails obtained through open records laws in more than two dozen states, interviews with dozens of participants in cases and attendance at several conferences where corporate representatives had easy access to attorneys general.
by Eric Lipton, NY Times | Read more:
Image: Alex Wong/Getty Images
Why Innocent People Plead Guilty
The criminal justice system in the United States today bears little relationship to what the Founding Fathers contemplated, what the movies and television portray, or what the average American believes.
To the Founding Fathers, the critical element in the system was the jury trial, which served not only as a truth-seeking mechanism and a means of achieving fairness, but also as a shield against tyranny. As Thomas Jefferson famously said, “I consider [trial by jury] as the only anchor ever yet imagined by man, by which a government can be held to the principles of its constitution.”
The Sixth Amendment guarantees that “in all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury.” The Constitution further guarantees that at the trial, the accused will have the assistance of counsel, who can confront and cross-examine his accusers and present evidence on the accused’s behalf. He may be convicted only if an impartial jury of his peers is unanimously of the view that he is guilty beyond a reasonable doubt and so states, publicly, in its verdict.
The drama inherent in these guarantees is regularly portrayed in movies and television programs as an open battle played out in public before a judge and jury. But this is all a mirage. In actuality, our criminal justice system is almost exclusively a system of plea bargaining, negotiated behind closed doors and with no judicial oversight. The outcome is very largely determined by the prosecutor alone.
In 2013, while 8 percent of all federal criminal charges were dismissed (either because of a mistake in fact or law or because the defendant had decided to cooperate), more than 97 percent of the remainder were resolved through plea bargains, and fewer than 3 percent went to trial. The plea bargains largely determined the sentences imposed.
While corresponding statistics for the fifty states combined are not available, it is a rare state where plea bargains do not similarly account for the resolution of at least 95 percent of the felony cases that are not dismissed; and again, the plea bargains usually determine the sentences, sometimes as a matter of law and otherwise as a matter of practice. Furthermore, in both the state and federal systems, the power to determine the terms of the plea bargain is, as a practical matter, lodged largely in the prosecutor, with the defense counsel having little say and the judge even less.
It was not always so. Until roughly the end of the Civil War, plea bargains were exceedingly rare. A criminal defendant would either go to trial or confess and plead guilty. If the defendant was convicted, the judge would have wide discretion to impose sentence; and that decision, made with little input from the parties, was subject only to the most modest appellate review.
After the Civil War, this began to change, chiefly because, as a result of the disruptions and dislocations that followed the war, as well as greatly increased immigration, crime rates rose considerably, and a way had to be found to dispose of cases without imposing an impossible burden on the criminal justice system. Plea bargains offered a way out: by pleading guilty to lesser charges in return for dismissal of the more serious charges, defendants could reduce their prison time, while the prosecution could resolve the case without burdening the system with more trials.
The practice of plea bargaining never really took hold in most other countries, where it was viewed as a kind of “devil’s pact” that allowed guilty defendants to avoid the full force of the law. But in the United States it became commonplace. And while the Supreme Court initially expressed reservations about the system of plea bargaining, eventually the Court came to approve of it, as an exercise in contractual negotiation between independent agents (the prosecutor and the defense counsel) that was helpful in making the system work. Similarly, academics, though somewhat bothered by the reduced role of judges, came to approve of plea bargaining as a system somewhat akin to a regulatory regime.
by Jed S. Rakoff, NY Review of Books | Read more:
Image: Honoré Daumier: A Criminal Case

A Plutocratic Proposal
[ed. Interesting, not only for the proposal, but the ethics and procedures involved in getting a new drug to market.]
“But you have missed the bigger idea!” exclaimed Peter Lanciano, grabbing the pepper grinder and banging it on the table. “The problem isn’t how to get my drug into Mr Pepperpot. The problem is how to protect me from being sued if Mr Pepperpot dies.”
It had taken me two years to track Lanciano down. For this meeting I’d broken off my holiday, woken up at three in the morning and flown 1,000 miles across Europe to have breakfast at a London hotel built like a penitentiary. Lanciano is the Executive Director of a small US drug company. In his early 50s, with a Teddy Roosevelt moustache and a lumberjack shirt stretched tight across his broad chest, I believe he can help solve a niggling problem that holds back medical research around the world and makes patients suffer. Every year, an untold number of potential new drugs or interventions, any one of which might go on to improve thousands of lives, are thrown away without being tested in humans. It is a matter of funding, not science: there is not enough money in the public or private sector to run clinical trials on every exciting proposal that comes out of research labs. Thoughtful but hurried (and often arbitrary) judgements are therefore made about which products to save – and the rest of these potentially life-saving therapies are ditched. “There’s tons of promising stuff out there,” says David Stojdl, cofounder of the Californian biotech company Jennerex Biotherapeutics, “and it is dying on the vine.”
I have a simple proposal for a way to rescue this waste. I’m not a scientist or a physician; I have no medical training. I’m a biographer and an illustrator, and until a couple of years ago I’d never heard of clinical trials. But I know my idea works because I’ve already tried it once, to rescue a promising anticancer therapeutic that was about to be thrown out in Sweden. The general version of my proposal has now received backing from a select group of university research departments and a clutch of experts on medical ethics, and has the interest of one of the world’s largest law firms specialising in the life sciences. If the scheme can be made to work on a larger scale, it will open up the possibility of millions (I think, billions) of pounds of extra money for clinical trials, especially for rare and difficult-to-treat diseases – the ones that traditional funders are reluctant to support. (...)
I began thinking again about the fundraising we’d done. Why not extend the principle of selling trial places, to raise money for other Uppsalas and other diseases: not just neuroendocrine cancer, or just cancer, but any illness? There are over 12 million millionaires in the world – any one of these would want to buy a place on a trial if it might purchase relief or stave off death. Every one of them has people they love for whom they’d pay good money to get an extra chance. Why not set up a charitable or private body that would arrange these ‘sales’?
My first thought was that it would be run like a dating agency. (...)
Wealthy people financing clinical trials is not new: this type of private funding already exists all over the world if all you want to do is make money. As Savulescu said, “If I were a venture capitalist, I could invest millions of dollars in funding the development of a drug, hoping to make hundreds of millions of dollars if it’s successful. So why shouldn’t I be able to pay the same money for the same development, to have a chance of saving my life? It is completely ludicrous.” (...)
In medical parlance, there are typically three phases of trials a drug has to pass before it can be sold commercially. Phase I tests the tolerable dose range and safety in healthy volunteers or, in the case of serious diseases such as cancer, in sick patients. With a few exceptions, the doses used are too small to offer medical benefit. In phase II the research team tests efficacy, and uses the information from phase I to provide potentially therapeutic treatment at the optimal safe dose. This second phase of trialling can be divided into two parts: IIa, which is open to all suitable patients and has no placebo wing, and IIb, in which placebos and randomisation are introduced. Phase III tests whether the drug is better than the best already available – this is abominably expensive, involves hundreds of people and is not worth thinking about unless you’re a multibillionaire.
O’Connor’s objection therefore restricted the Dating Agency to brokering phase I or phase IIa trials. That’s not terrible. That’s almost all it was intended to do anyway: get promising preclinical research over the hurdle into early-stage clinical trials, because that comparatively small amount of money is way beyond the reach of ordinary university departments. Uppsala is a combined phase I/phase IIa trial.
O’Connor’s next objection had not occurred to me at all: “What happens if the drug works?”
by Alexander Masters, Mosaic | Read more:
Image: Jean Jullien at Handsome Frank
Monday, November 3, 2014
Three Breakthroughs That Have Finally Unleashed AI on the World
A few months ago I made the trek to the sylvan campus of the IBM research labs in Yorktown Heights, New York, to catch an early glimpse of the fast-arriving, long-overdue future of artificial intelligence. This was the home of Watson, the electronic genius that conquered Jeopardy! in 2011. The original Watson is still here—it's about the size of a bedroom, with 10 upright, refrigerator-shaped machines forming the four walls. The tiny interior cavity gives technicians access to the jumble of wires and cables on the machines' backs. It is surprisingly warm inside, as if the cluster were alive.
Today's Watson is very different. It no longer exists solely within a wall of cabinets but is spread across a cloud of open-standard servers that run several hundred “instances” of the AI at once. Like all things cloudy, Watson is served to simultaneous customers anywhere in the world, who can access it using their phones, their desktops, or their own data servers. This kind of AI can be scaled up or down on demand. Because AI improves as people use it, Watson is always getting smarter; anything it learns in one instance can be immediately transferred to the others. And instead of one single program, it's an aggregation of diverse software engines—its logic-deduction engine and its language-parsing engine might operate on different code, on different chips, in different locations—all cleverly integrated into a unified stream of intelligence.
Consumers can tap into that always-on intelligence directly, but also through third-party apps that harness the power of this AI cloud. Like many parents of a bright mind, IBM would like Watson to pursue a medical career, so it should come as no surprise that one of the apps under development is a medical-diagnosis tool. Most of the previous attempts to make a diagnostic AI have been pathetic failures, but Watson really works. When, in plain English, I give it the symptoms of a disease I once contracted in India, it gives me a list of hunches, ranked from most to least probable. The most likely cause, it declares, is Giardia—the correct answer. This expertise isn't yet available to patients directly; IBM provides access to Watson's intelligence to partners, helping them develop user-friendly interfaces for subscribing doctors and hospitals. “I believe something like Watson will soon be the world's best diagnostician—whether machine or human,” says Alan Greene, chief medical officer of Scanadu, a startup that is building a diagnostic device inspired by the Star Trek medical tricorder and powered by a cloud AI. “At the rate AI technology is improving, a kid born today will rarely need to see a doctor to get a diagnosis by the time they are an adult.”
Medicine is only the beginning. All the major cloud companies, plus dozens of startups, are in a mad rush to launch a Watson-like cognitive service. According to quantitative analysis firm Quid, AI has attracted more than $17 billion in investments since 2009. Last year alone more than $2 billion was invested in 322 companies with AI-like technology. Facebook and Google have recruited researchers to join their in-house AI research teams. Yahoo, Intel, Dropbox, LinkedIn, Pinterest, and Twitter have all purchased AI companies since last year. Private investment in the AI sector has been expanding 62 percent a year on average for the past four years, a rate that is expected to continue.
Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness—or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it's here.

by Kevin Kelly, Wired | Read more:
Image: MIT News