Sunday, June 30, 2019
The Beautiful and the Crapified
Last week’s announcement of the departure of Apple chief design officer Jony Ive marks the end of an era: the last connection to the Apple of Steve Jobs.
Now, no one would deny that Ive created beautiful objects.
As iFixit notes:
The iPod, the iPhone, the MacBook Air, the physical Apple Store, even the iconic packaging of Apple products—these products changed how we view and use their categories, or created new categories, and will be with us a long time.

But the title of that iFixit post, Jony Ive’s Fragmented Legacy: Unreliable, Unrepairable, Beautiful Gadgets, makes clear that those beautiful products carried with them considerable costs – above and beyond their high prices. They’re unreliable, and difficult to repair.
Ironically, both Jobs and Ive were inspired by Dieter Rams – whom iFixit calls “the legendary industrial designer renowned for functional and simple consumer products.” And unlike Apple, Rams believed that good design didn’t have to come at the expense of either durability or the environment:
Rams loves durable products that are environmentally friendly. That’s one of his 10 principles for good design: “Design makes an important contribution to the preservation of the environment.” But Ive has never publicly discussed the dissonance between his inspiration and Apple’s disposable, glued-together products. For years, Apple has openly combated green standards that would make products easier to repair and recycle, stating that they need “complete design flexibility” no matter the impact on the environment.

Complete Design Flexibility Spells Environmental Disaster
In fact, that complete design flexibility – at least as practiced by Ive – has resulted in crapified products that are an environmental disaster. Their lack of durability means they must be repaired to be functional, and the lack of repairability means many of these products end up being tossed prematurely – no doubt not a bug, but a feature. As Vice recounts:
But history will not be kind to Ive, to Apple, or to their design choices. While the company popularized the smartphone and minimalistic, sleek, gadget design, it also did things like create brand new screws designed to keep consumers from repairing their iPhones.
Under Ive, Apple began gluing down batteries inside laptops and smartphones (rather than screwing them down) to shave off a fraction of a millimeter at the expense of repairability and sustainability.
It redesigned MacBook Pro keyboards with mechanisms that are, again, a fraction of a millimeter thinner, but that are easily defeated by dust and crumbs (the computer I am typing on right now—which is six months old—has a busted spacebar and ‘r’ key). These keyboards are not easily repairable, even by Apple, and many MacBook Pros have to be completely replaced due to a single key breaking. The iPhone 6 Plus had a design flaw that led to its touch screen spontaneously breaking—it then told consumers there was no problem for months before ultimately creating a repair program. Meanwhile, Apple’s own internal tests showed those flaws. He designed AirPods, which feature an unreplaceable battery that must be physically destroyed in order to open.

Vice also notes that in addition to Apple’s products becoming “less modular, less consumer friendly, less upgradable, less repairable, and, at times, less functional than earlier models”, Apple’s design decisions have not been confined to Apple. Instead, “Ive’s influence is obvious in products released by Samsung, HTC, Huawei, and others, which have similarly traded modularity for sleekness.”
Right to Repair
As I’ve written before, Apple is a leading opponent of giving consumers a right to repair. Nonetheless, there’s been some global progress on this issue (see Global Gains on Right to Repair). And we’ve also seen a widening of support in the US for such a right. The issue has arisen in the current presidential campaign, with Elizabeth Warren throwing down the gauntlet by endorsing a right to repair for farm tractors. The New York Times has also taken up the cause more generally (see Right to Repair Initiatives Gain Support in US). More than twenty states are considering enacting right to repair statutes.
This stirring of support has led Apple to increase its lobbying efforts, deploying increasingly specious arguments – such as these recently offered to California legislators: consumers will hurt themselves if provided a right to repair, and such a change would empower hackers (see Apple to California Legislators: Consumers Will Hurt Themselves if Provided a Right to Repair). Rather than seeing these arguments derided and rejected, the lobbying succeeded, leading in April to cancellation of a hearing on then-pending California legislation, which now cannot move forward until 2020 at the earliest. Other state initiatives remain pending.
by Jerri-Lynn Scofield, Naked Capitalism | Read more:
Image: uncredited via
Saturday, June 29, 2019
A History of the Bible by John Barton
Tiptoeing through a minefield.
A quiz question, which is also a trick question: how many references to the doctrine of the Trinity are there in the Bible? The answer: two, at a pinch. One of them was probably inserted into the text of the Gospel of John by a zealous scribe well after the gospel was written. This is known as “the Johannine comma” (where comma means “clause” or “phrase”). The other (in Matthew) was also probably a later addition by a pious scribe.
As John Barton shows in this massive and fascinating book, the Bible really did have a history. It grew and developed. As its disparate books were gradually integrated into the theological structures of the church, scribes would engage in what is called “the orthodox corruption of scripture”. So once the notion that God the Father, the Son and the Holy Spirit were all equal persons of the Trinity was established it became natural to seek confirmation of that doctrine in the Bible.
The Epistles of St Paul were probably written not long after the death of Christ, in the AD40s or 50s. St Paul appears to have been an “adoptionist” – holding that Jesus was adopted as the Son of God at the resurrection – rather than a believer in the Trinity.
The gospels (which show knowledge of the fall of the Jerusalem Temple in AD70) were written at least two decades after Paul’s epistles. And the Gospel of John was possibly written as late as the second century. It presents a Jesus who talks a great deal about his own status as God’s son. This more likely reflects the beliefs of a later era than that of Jesus himself, and John’s gospel may indeed be a biography of Christ written to suit the interests and beliefs of John’s own particular branch of Christianity. The episode of the woman taken in adultery – “He that is without sin among you, let him first cast a stone at her” – which appears only in this gospel, is not found in the earliest manuscripts, and is likely to be an even later addition.
Does this mean that Barton’s history of the Bible provides an armoury of arguments for religious sceptics? Well, the sceptical will certainly find material here to deploy. But Barton – who is an Anglican with Lutheran leanings – believes that it’s perfectly possible to see the Bible as a book with its own history and also to regard it as a repository of religious truths.
He views the New Testament as a collection of records written by different people, probably for different religious communities, at different times. The gospels were preserved not in scrolls but in codices – bound volumes with separate leaves – and were “not fixed Scripture but simply the reminiscences of the Apostles”. That explains why they can be internally inconsistent, but also how they can be thought of as texts that give a range of different angles on the life of Christ, even if they don’t all relate (in that common phrase) the gospel truth.
Barton opposes Dan Brown-style conspiracy theorists who think that some time in the fourth century a powerful church suppressed a range of heterodox scriptures and created the New Testament as we now know it. He argues convincingly that by the second century there was a loose canon of holy books that were broadly similar to those included in the Bible today.
Although Barton is a Christian he’s also an excellent guide to the composition of what is usually called the “Old Testament” – though, as he reminds us, that name implies that the Hebrew Bible (as he prefers to call it) is no more than a precursor to the New Testament. Early Christian thinkers saw it this way. They regarded the life of Christ as the great truth towards which the Hebrew prophets and scriptures pointed, and which superseded the old faith and its laws. They read the Hebrew Bible as a story of disobedience and falling: Adam and Eve fell, and then Christ reversed the effects of that fall. That could go along with hostility to Jewish beliefs, and even antisemitism. For the majority of Jews, however, the Hebrew Bible was “not at all about fall and redemption, but about how to live a faithful life in the ups and downs of the ongoing history of the people of Israel”.
The Hebrew Bible itself developed over a long period, probably from about the eighth to the second century BC. Barton suggests that the Book of Proverbs may well have been produced by something like Israel’s civil service. Job and Ecclesiastes are much later works, possibly written by individuals. The Psalter, a mixture of liturgy, national history and individual experience, which Barton describes as “a mess”, probably came together in about 300BC, although individual psalms may be much older than this.
The historical method of analysing layers of composition in the Bible even casts a faint shadow over the Ten Commandments. They are delivered on tablets of stone to an early itinerant nation. But since they include the commandment “Thou shalt not covet thy neighbour’s house ... nor his manservant, nor his maidservant, nor his ox, nor his ass”, they imply “a settled agrarian community”.
If the tablets of stone of the decalogue seem to crumble at the edges when the Bible is subjected to historical analysis, then Barton’s readers might wonder how religious faith can coexist with a Bible that is regarded as an internally contradictory text with a long history and diverse cultural origins.
Sceptics, indeed, might find in his magisterial overview of the history of the Bible clear evidence that orthodox religions are grounded in the beliefs of communities rather than in a single authoritative text that records the word of God.
Believers, on the other hand, might follow him in taking a flexible view of the Bible as a collection of texts that preserve reminiscences of the life of Jesus and about God and how to worship him. Barton says this history is “the story of the interplay between religion and the book – neither mapping exactly onto the other”. Problems arise when interpreters try to impose orthodox religious beliefs on its text: “The extreme diversity of the material in the Bible is not to be reduced by extracting essential principles, but embraced as a celebration of variety.”
That might sound like wishy-washy Anglicanism. But there is a lot of argumentative muscle in Barton’s book. He aims to “dispel the image of the Bible as a sacred monolith between two black leather covers”. So he has little time for fundamentalists and Biblical literalists who believe that its every word is sacred.
by Colin Burrow, The Guardian | Read more:
Image: The Garden of Eden With the Fall of Man (1615) by Jan Brueghel and Rubens. Photograph: Alamy
[ed. See also: GOD: A Biography - A Flawed Character (NY Times) and The Problem of Evil (Stanford Encyclopedia of Philosophy).]
Gone But Not Forgotten
Dealing With Hospital Closure, Pioneer Kansas Town Asks: What Comes Next?
A slight drizzle had begun in the gray December sky outside Community Christian Church as Reta Baker, president of the local hospital, stepped through the doors to join a weekly morning coffee organized by Fort Scott’s chamber of commerce.
The town manager was there, along with the franchisee of the local McDonald’s, an insurance agency owner and the receptionist from the big auto sales lot. Baker, who grew up on a farm south of town, knew them all.
Still, she paused in the doorway with her chin up to take in the scene. Then, lowering her voice, she admitted: “Nobody talked to me after the announcement.”
Just a few months before, Baker — joining with the hospital’s owner, St. Louis-based Mercy — announced the 132-year-old hospital would close. Baker carefully orchestrated face-to-face meetings with doctors, nurses, city leaders and staff members in the final days of September and on Oct. 1. Afterward, she sent written notices to the staff and local newspaper.
For the 7,800 people of Fort Scott, about 90 miles south of Kansas City, the hospital’s closure was a loss they never imagined possible, sparking anger and fear.
“Babies are going to be dying,” said longtime resident Darlene Doherty, who was at the coffee. “This is a disaster.”
Bourbon County Sheriff Bill Martin stopped before leaving the gathering to say the closure has “a dark side.” And Dusty Drake, the lead minister at Community Christian Church, diplomatically said people have “lots of questions,” adding that members of his congregation will lose their jobs.
Yet, even as this town deals with the trauma of losing a beloved institution, deeper national questions underlie the struggle: Do small communities like this one need a traditional hospital at all? And, if not, what health care do they need?
Sisters of Mercy nuns first opened Fort Scott’s 10-bed frontier hospital in 1886 — a time when traveling 30 miles to see a doctor was unfathomable and when most medical treatments were so primitive they could be dispensed almost anywhere.
Now, driving the four-lane highway north to Kansas City or crossing the state line to Joplin, Mo., is a day trip that includes shopping and a stop at your favorite restaurant. The bigger hospitals there offer the latest sophisticated treatments and equipment.
And when patients here get sick, many simply go elsewhere. An average of nine patients stayed in Mercy Hospital Fort Scott’s more than 40 beds each day from July 2017 through June 2018. And these numbers are not uncommon: Forty-five Kansas hospitals report an average daily census of fewer than two patients.
James Cosgrove, who directed a recent U.S. Government Accountability Office study about rural hospital closures, said the nation needs a better understanding of what the closures mean to the health of people in rural America, where the burden of disease — from diabetes to cancer — is often greater than in urban areas.
What happens when a 70-year-old grandfather falls on ice and must choose between staying home and driving to the closest emergency department, 30 miles away? Where does the sheriff’s deputy who picks up an injured suspect take his charge for medical clearance before going to jail? And how does a young mother whose toddler fell against the coffee table and now has a gaping head wound cope?
There is also the economic question of how the hospital closure will affect the town’s demographic makeup since, as is often the case in rural America, Fort Scott’s hospital is a primary source of well-paying jobs and attracts professionals to the community. (...)
Mercy Hospital Fort Scott joined a growing list of more than 100 rural hospitals that have closed nationwide since 2010, according to data from the University of North Carolina’s Cecil G. Sheps Center for Health Services Research. How the town copes is a window into what comes next.
‘We Were Naive’
Over time, Mercy became so much a part of the community that parents expected to see the hospital’s ambulance standing guard at the high school’s Friday night football games.
Mercy’s name was seemingly everywhere, actively promoting population health initiatives by working with the school district to lower children’s obesity rates as well as local employers on diabetes prevention and healthy eating programs — worthy but, often, not revenue generators for the hospital.
“You cannot take for granted that your hospital is as committed to your community as you are,” said Fort Scott City Manager Dave Martin. “We were naive.”
Indeed, in 2002 when Mercy decided to build the then-69-bed hospital, residents raised $1 million out of their own pockets for construction. Another million was given by residents to the hospital’s foundation for upgrading and replacing the hospital’s equipment.
“Nobody donated to Mercy just for it to be Mercy’s,” said Bill Brittain, a former city and county commissioner. The point was to have a hospital for Fort Scott.
by Sarah Jane Tribble, Kaiser Health News | Read more:
Image: Christopher Smith
[ed. And, in other Health News, see also: Hidden FDA Reports Detail Harm Caused By Scores Of Medical Devices (KHN).]
Image: Julia Le Duc/AP
Friday, June 28, 2019
Why Does Every Beer Look So Cool Now?
On a recent weekend afternoon, I found myself in my neighborhood grocery store contemplating a wall of beer. This section of the store is like a candy aisle, filled with rows of brightly colored cans and illustrated boxes that look like they were plucked from a design blog.
Like a sugar-addled child, my eyes darted from one label to the next while I sorted through what makes the hazy IPA with the colorful, abstract drawing on the label any different than the hazy IPA with the sans-serif logo. I came to the conclusion that it really didn’t matter, and grabbed the cheaper six pack.
This level of beer-aisle deliberation is a relatively new phenomenon. Choosing beer used to be easy. There were the old standbys—the Budweisers, the Millers, and the occasional import like Heineken—all with classic labels that made you want to use the word “brewsky.” Today, choosing a beer can require a full-on aesthetic assessment. Even Milton Glaser has something to say about it.
Small, independent brewing is booming, and it’s brought with it a renaissance in beer label design. To put it in perspective: Ten years ago, the United States had 1,650 registered craft breweries; today there are more than 7,300, and that number is only going to grow. This is good news for beer lovers, but bad news for indecisive drinkers who make decisions based on whatever looks cool. The problem we’re facing today, if you can really call it a problem, is that pretty much everything looks cool now.
Beer magazine Caña recently wrote that “beer cans are officially the new record sleeve,” and it’s right. While Big Beer is all about brand recognition and consistency, craft breweries have embraced a more experimental approach, distinguishing themselves with labels designed to catch the eye when you’re scanning the cooler.
“A large percentage of beer lovers walk into a store and don’t know what they’re going to buy,” says Julia Herz, the Craft Beer program director at the Brewers Association, the national organization for craft breweries. “The pressure at retail is to stand out and get noticed.” A good label is a calling card. It’s a chance for breweries to convince you to choose their beer and not the one next to it.
“Design is everything,” Herz adds. “Craft brewers don’t typically have Big Beer advertising budgets. Most of them are doing grassroots marketing, and nothing is more grassroots than your packaging.”
When craft beer was still a small operation, breweries would often design their label in-house or ask a friend or local artist to pull something together for a release. “It was really innocent,” says Oceania Eagan, founder of Blindtiger Design, a Seattle agency that specializes in designing identities for breweries. “That innocence meant people could just hodgepodge things together. You can’t get away with that anymore.” In the last five or so years, craft brewing has reached a level of maturity where breweries have decided it’s worth the time and money to hire a design studio that can help them “professionalize” their look. And accordingly, a crop of design studios like Blindtiger, whose primary, if not exclusive, focus is on beer branding, has sprung up to take advantage of the growing market.
by Liz Stinson, Eye on Design | Read more:
Image: Katharina Brenner
How to find The One
‘I want a man who’s kind and understanding. Is that too much to ask of a millionaire?’
Zsa Zsa Gabor, actress and socialite (1917-2016)
The search for ‘The One’ can indeed feel futile. You might test what can feel like endless candidates and not find anyone you really like. You can travel great distances but never reach the Promised Land. Even when this land seems to be found, there is no lifetime guarantee, and the expiration date of this happy kingdom might be brief. Breakups, not long-term relationships, appear to be the norm. In many societies, about half of all marriages end in divorce, and lots of the remaining half have at some point seriously considered it.
In light of these difficulties, doubts have been raised concerning the value of this kind of search. One person might dismiss the quest altogether. ‘Done with trying to find a woman for life. Much easier to just hook up for a good short time. Avoid all the other personal drama!’ as one man told me. Another stops the search early, after finding profound love and connection when very young. ‘I’ve never regretted not ordering the fish when my steak arrived cooked and seasoned to my liking,’ said a woman who married her first lover. Yet others say they’ve found The One yet continue sampling what’s out there. ‘I want both – a long, profound love and a series of short, intense romantic-sexual experiences. Lust and profound love are both meaningful and satisfying for me,’ another woman explains. (...)
Despite these kinds of caveats, when it comes to finding The One, strategy counts, starting with the very definition of ‘perfect’. One dictionary definition is flawless: being entirely without fault or defect. The other is most suitable: being as good as possible, and completely appropriate. While the first meaning focuses on eliminating the negative, the second centres on finding as much positive as one can.
Clearly, the search for the flawless person is an exercise in utter futility. Through this lens, the beloved is seen as a kind of icon, without relation to the partner. Here, one looks at qualities that stand on their own, such as intelligence, appearance, humour or wealth. This sort of measure has two advantages – it is easy to use, and most people would agree about the assessments. It’s an approach that takes a static view, in which romantic love is essentially fixed – and that’s something we know doesn’t work well in the real world.
On the other hand, looking for the most suitable person under a given set of circumstances might allow you to build an intimate connection, and could yield a flourishing partnership. This view emphasises the uniqueness of the relationship; it sees the beloved’s most important qualities in relationship to the partner, and offers a dynamic kind of romantic love over time. Such love involves intrinsic development that includes bringing out the best in each other. The suitability scale is much more complex, since it depends on personal and environmental factors about which we do not have full knowledge.
Ultimately, both scales count. So in seeking a true life partner, it’s worth considering the equation for yourself. Should you marry a smart person? Generally speaking, intelligence is considered good – but here is where things get more complicated. If there is a big gap between the IQ of the two partners, their suitability for each other will be low because, in this particular realm, the trait, though nonrelational, is significant to relationship success.
The same goes for wealth. On the nonrelational scale, a lot of money is often good, but a wealthy person might score low on fidelity (fat bank accounts open many romantic doors). Moreover, wealthy people tend to believe that they are more deserving, and hence their caring behaviour might be lower. In the same vein, having a good sexual appetite is usually good, but a large discrepancy between the partners’ sexual needs is not conducive to that crucial romantic connection. If, for instance, a man wants to have sex once or twice a week and a woman wishes to have sex multiple times a day, would they be suitable partners? Clearly not. And even if all these nonrelational factors match up, partners still won’t bring out the best in each other unless they truly connect.
For many people, the quest for the perfect person based on qualities such as beauty, intelligence and wealth (instead of the perfect partner, who offers connection and flourishing) is a major obstacle to finding The One. Since life is dynamic and people change their attitudes, priorities and wishes over time, achieving such romantic compatibility is not a onetime accomplishment, but an ongoing process of mutual interactions. In a crucial and perhaps little-understood switch, perfect compatibility is not necessarily a precondition for love; it is love and time that often create a couple’s compatibility.
Can a person cognisant of the two scales use this knowledge to aid the quest? There’s a calculus, it turns out. We all know the drill. You compile a checklist of the perfect partner’s desirable and undesirable traits, and tick off each trait that your prospective partner has. This search approach is pretty much how online dating works: it focuses on negative, superficial qualities, and tries to quickly filter out unsuitable candidates. Eliminating bad options is natural in an environment of abundant romantic options.
But the checklist practice is flawed because it typically lacks any intrinsic hierarchy weighting the different traits. For instance, it fails to put kindness ahead of humour, or intelligence before wealth. And it focuses on the other person’s qualities in isolation, scarcely giving any weight to the connection between the individuals; in short, it fails to consider the value of the other person as a suitable partner.
Benjamin Franklin, one of the US Founding Fathers, counselled his nephew to use knowledge to find a wife: one should proceed like a bookkeeper, he advised – list all the pros and cons, weigh up everything for two or three days, and then make a decision. But research from 1999 by the psychologists Gerd Gigerenzer of the Max Planck Institute for Human Development in Berlin and Daniel Goldstein, now at Microsoft in New York, shows that computer-based versions of Franklin’s rational bookkeeping manner – a program that weighed 18 different cues – proved less accurate than following the rule of thumb: ‘Get one good reason and ignore the rest of the information.’
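[ed. A minimal sketch of the two decision rules – Franklin’s weighted bookkeeping tally versus Gigerenzer’s ‘take the best’ heuristic. This is illustrative Python only; the cue names, weights and candidates are invented, not the researchers’ actual program or data:]

```python
import random

random.seed(0)  # reproducible demo

# 18 hypothetical cues, matching the number weighed in the 1999 study.
CUES = [f"cue_{i}" for i in range(18)]
# Invented importance weights, standing in for cue validities.
WEIGHTS = {c: random.uniform(0.1, 1.0) for c in CUES}
# 'Take the best' consults cues in order of validity, best first.
VALIDITY_ORDER = sorted(CUES, key=WEIGHTS.get, reverse=True)


def bookkeeping_score(candidate):
    """Franklin-style tally: weigh up every cue and sum the lot."""
    return sum(WEIGHTS[c] * candidate[c] for c in CUES)


def take_the_best(a, b):
    """Decide on the single most valid cue that discriminates,
    ignoring all the rest of the information."""
    for c in VALIDITY_ORDER:
        if a[c] != b[c]:
            return "a" if a[c] > b[c] else "b"
    return "a"  # no cue discriminates; guess


# Two made-up candidates with binary cues (1 = has the trait).
a = {c: random.randint(0, 1) for c in CUES}
b = {c: random.randint(0, 1) for c in CUES}

print("bookkeeping picks:", "a" if bookkeeping_score(a) >= bookkeeping_score(b) else "b")
print("take-the-best picks:", take_the_best(a, b))
```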
Still, under clear-cut circumstances, the checklist can work. When the feeling is outright disinterest, you can eliminate the individual based on some unacceptable objective trait (such as an unpleasant laugh or dandruff) and a lack of simpatico. So the score for that person on both scales would be: superficial and negative. (...)
Then there’s the scenario in Graeme Simsion’s popular novel, The Rosie Project (2013). The protagonist, Don Tillman, is a university professor seeking a wife, and so he prepares a detailed list of what he’s looking for in a woman: someone intelligent but also a good cook, who is physically fit, as well as a teetotal nonsmoker. Don rules out many women until he meets Rosie, a bartender who smokes, drinks and otherwise fails on most of his criteria. Together, they search for Rosie’s biological father and, in the process, Don falls in love with Rosie. It is not her individual characteristics that generate his love. It is the harmony he feels while spending more and more time with her, which makes all the difference. There’s something ‘off’, but you like – okay, love – the person anyway. After all, we can learn to live with superficial flaws, but profound flaws and lack of intimacy pose a real danger to a long-term loving relationship. Score: negative but profound. This is a life worth considering – and better, by the odds, than the options above.
Finally, you might hit the jackpot. You like the outside package, and you bring out the best in each other, all at once. In 2002, the social psychologist Stephen Drigotas at Southern Methodist University in Dallas demonstrated that when a close romantic partner sees and acts toward you in a manner that matches your ideal self, you move closer to that self, an effect he calls ‘the Michelangelo Phenomenon’. Just as Michelangelo saw sculpting as his process of releasing the ideal forms hidden in the marble, our romantic partners ‘sculpt’ us. Close partners sculpt one another in a manner that brings each individual closer to his or her ideal self, thus bringing out the best in each other and making both feel good about themselves. In such relationships, we see personal growth and flourishing in statements such as: ‘I’m a better person when I am with her.’ Score: positive and profound.
For most of human history, marriage was a practical arrangement designed to enable the couple to meet their basic survival and social needs. Passionate love had precious little to do with it. The American historian Stephanie Coontz, the author of Marriage, a History (2006), shows that this ideal emerged only about 200 years ago. She observes that: ‘In many cultures, love has been seen as a desirable outcome of marriage, but not as a good reason for getting married in the first place.’ The French philosopher Pascal Bruckner, the author of Has Marriage for Love Failed? (2010), argues that in the past marriage was sacred, and love, if it existed at all, was a kind of bonus; now, love is sacred and marriage is secondary. Accordingly, the number of marriages has been declining, while divorces, cohabitation and single-parent families are increasing. It seems that, as he puts it, ‘love has triumphed over marriage but now it is destroying it from inside’.
In addition to the pragmatic and the loved-based marriage types, the psychologist Eli Finkel at Northwestern University in Illinois adds the personal fulfilment marriage – or, as his book puts it, The All-or-Nothing Marriage (2017) – which developed in the US around 1965. As the growing demands of marriage make it impossible to find a partner who excels in all important areas, Finkel presents this third type of marriage, which requires that we compromise and accept a partner who is in some important ways good enough, if not the very best. Rather than aim high with an ideal marriage, we should be satisfied with a less-than-perfect marriage that enables us to have a family and to thrive.
Yes, there’s an optimal prescription for finding The One, but that doesn’t abolish the possibility of never finding the romantic partner of your dreams. For your own flourishing, you might need to settle for less. The question is, how much ‘less’ can your partner be, and still be a sufficiently good partner?
by Aaron Ben-Ze’ev, Aeon | Read more:
Image: Per-Anders Pettersson/Getty
Achieving the Impossible
In the ever-growing category of plant-based meats, the Impossible Burger is known as “the one that bleeds.” When I ate my first Impossible Burger at a Bareburger in Brooklyn, I didn’t detect anything blood-like, but absent that, it felt as real as any burger I could remember eating. With a light char on the outside and topped with pickles and American cheese, it channeled the burgers of backyard cookouts in a way that veggie burgers just don’t, which makes sense because as Impossible Foods insists, the Impossible Burger isn’t a veggie burger: It is meat, made from plants.
Impossible is not the only plant-based meat brand making that “meat from plants” claim, though it takes the most subtle middle ground in its branding. Competitor Beyond Meat — who’s crushing it on the stock market after going public in May — peppers its online mission statement with IPO-friendly verbs: It “builds” and “creates” meat that it calls “the Future of Protein®”, a product that just happens to be, by its own estimation, “delicious” and “mouthwatering.” The 40-year-old veggie burger stalwart Boca Foods (now owned by Kraft) also employs the language expected with a food brand (its products, according to its website, are “packed with flavor” and meant to “satisfy junk food cravings”).
But it’s Impossible at the center of conversations among those who purport to be interested in food — vegans and omnivores alike. Like Beyond, it stressed its scientific advancements in its early days, and like both Beyond and Boca, it wants its customers to consider its product “mouthwatering” and otherwise discerningly similar to meat. But Impossible also employed a shrewd campaign that emphasized high-end gloss. It recruited celebrity investors like Jay-Z and Serena Williams and placed famous chefs and restaurants — not grocery stores, direct-to-consumer subscriptions, or university cafeterias — at the center of its strategy. Impossible became the faux-meat burger “worthy” of meat-loving chefs.
“[Chefs] are followed on Twitter and Instagram and Facebook,” says Rachel Konrad, Impossible Foods’s chief communications officer. “We have entire television channels dedicated to them. They are enormous influences, not only in foodie circles, but in broader lifestyle trend circles.” Impossible sought out chefs with widely recognizable names to give the brand cultural capital, thus making it the faux-meat burger that the conscientious, trend-seeking consumer had to try. And the chefs they most wanted to represent their product were those who had no problem whatsoever with cooking meat.
David Chang, one of the most recognizable names in popular food culture, was the first to serve the Impossible Burger at his New York City restaurant Nishi. “We’re always looking to support people who are making the best products in the best ways possible and to me, the Impossible Burger is one more example,” Chang said at the time. “First and foremost, we think this makes a delicious burger.” (...)
Impossible’s rollout followed the lead of other culty food companies: Blue Bottle was a boutique coffee roaster in San Francisco before it was a $700 million brand with more than 70 locations in the U.S. and Asia. Soylent was once the liquid meal replacement of choice for a certain kind of tech industry employee, available for purchase only online. Now, it’s sold at 7-Eleven, corner stores, and at Walmart. The strategy, says Konrad, means that the current Impossible customer tends to be “very literate, highly educated, fairly high-earning, and very disproportionately millennials.” It’s the people who are reading about the Impossible Burger on the Internet (there’s virtually no other way to find out about it), the exact group a startup brand (like Warby Parker or Tuft & Needle before) would want as its taste-making ambassadors. (...)
In early 2019, the company launched Impossible 2.0, a new formula that better mimicked the look and texture of ground beef. Impossible product now forms the sausage crumbles on top of a Little Caesars supreme pizza testing in select markets, it will soon form the filling in tacos and bowls at 730 Qdoba locations, and by the end of 2019, there will be an Impossible Whopper at Burger Kings across the country, eventually replacing the MorningStar veggie burger that’s been on Burger King menus since 2002.
by Monica Burton, Eater | Read more:
Image: Andrea D’Aquino
Thursday, June 27, 2019
The Apocalyptic Cult of Cancel Culture
Jordanian-American Natasha Tynes is an award-winning author who faced government prosecution in Egypt for her work defending free speech and a free press. In May, Tynes saw a Washington, D.C. Metropolitan Area Transit Authority (WMATA) worker eating on the train—something she understood to be prohibited for all riders. She asked the employee about it. The woman responded, “Worry about yourself.” Frustrated that she rides the Metro hungry in order to comply with the rules while someone she understood to have the power to ticket her for eating was not complying with the rules, herself, Tynes “tweet-shamed” the employee by writing a complaint to WMATA and posting it on Twitter along with a photo of the woman eating.
Horrible behavior? I’d say so. Disappointing to see someone behave that way? Sure. But in a world in which online shaming is the new norm, it’s not surprising. What is surprising is that less than 45 minutes after posting the tweet, Tynes deleted it, apologized for what she called a “short-lived expression of frustration,” and contacted WMATA to say it was an “error in judgment” for her to report the employee. She even asked that WMATA not discipline the employee.
But Twitter’s outrage machine turned on her. She became “Metro Molly.” The independent publishing company set to distribute her novel tweeted that Tynes had done “something truly horrible” and they had “no desire to be involved with anyone who thinks it’s acceptable to jeopardize a person’s safety and employment in this way.” Her publishing deal was canceled.
Does this seem like a high price to pay for a 45-minute lapse in judgment? Or even for acting like, well, a jerk?
Enter Kyle Kashuv, the conservative Parkland school shooting survivor who declined lucrative scholarship offers in order to attend Harvard University only to have his admission rescinded after schoolmates alerted the Huffington Post to some extremely offensive, racist, and antisemitic comments Kashuv (who is Jewish) made in a private online chat when he was 16 years old. Kashuv, now 18, apologized publicly and unequivocally, and acknowledged his misdeeds in a letter to Harvard’s admissions office. He even sent a separate letter to Harvard’s diversity dean. As David French remarked in the National Review, Kashuv did “everything we want a young man to do when he’s done something wrong.”
One might think that Harvard would relish the opportunity to educate a young man who seems to have an interest in being a decent and productive member of society but appears not to have had the benefit of growing up in an environment in which uttering racial slurs is unthinkable. What could be better for him than spending four years in a community in which the thinking that produces that kind of behavior is replaced with better thinking (producing better behavior)?
Imagine the success story Harvard could have told: Teen with racist past graduates from Harvard with a commitment to social justice. But as Robby Soave of Reason Magazine noted, instead, a “corrosive impulse to seek and destroy” resulted in Harvard’s decision, seemingly “an endorsement of the position that people should be shamed and punished for their worst mistakes as kids.” On the other hand, former university president Michael Nietzel thinks Harvard was right to rescind the admission. “The idea that Mr. Kashuv should not be held accountable for his behavior because he was only 16 just doesn’t cut it. … Harvard was reasonable to say that his choice had consequences.”
Zack Beauchamp of Vox thinks the political left and right don’t see eye to eye on this incident because the view from the right is “sympathetic” while the view from the left is “critical.” What he sees as the “conservative view of racism” approaches racism as a “personal failing.” According to this view, he says, people can overcome their racism by “striving not to let race affect the way (they) speak and act,” and “the real threat isn’t the racist comments themselves,” because they can be overcome, “but the impulse to punish people for them.” From this “sympathetic” perspective, penalizing everyone for their past transgressions leaves them no room to grow, and even opens up the possibility of punishing the innocent.
While the “conservative” view focuses on individual growth and development, what Beauchamp defines as the “liberal and leftist” view sees racism as “a structural problem”—less of a personal failing to be overcome and more “unshakable,” leading “even people who firmly believe in ideals of equal treatment to act or speak in prejudiced ways.” According to this view, he says, “Kashuv looks less like a kid who made youthful mistakes and more like a young man who’s trying to escape responsibility for his actions.”
by Pamela B. Paresky Ph.D., Psychology Today | Read more:
Image: Muns/Wikimedia Commons
[ed. See also: Melvil Dewey's name stripped from top librarian award (The Guardian), The Dark Forest Theory of the Internet (One Zero), and On John Wayne, Cancel Culture, and the Art of Problematic Artists (LitHub).]
Streaming TV is About to Get Very Expensive
The most watched show on US Netflix, by a huge margin, is the US version of The Office. Even though the platform pumps out an absurd amount of original programming – 1,500 hours last year – it turns out that everyone just wants to watch a decade-old sitcom. One report last year said that The Office accounts for 7% of all US Netflix viewing.
So, naturally, NBC wants it back. This week, it was announced that Netflix had failed to secure the rights to The Office beyond January 2021. The good news is that it will still be available to watch elsewhere. The bad news is that “elsewhere” means “the new NBCUniversal streaming platform”.
As a viewer, you are right to feel queasy. The industry-disrupting success of Netflix means that everybody wants a slice of the pie. Right now, things are just about manageable – if you have a TV licence, a Netflix subscription, an Amazon subscription and a Now TV subscription, you are pretty much covered – but things are about to take a turn for the worse.
In November, Disney will launch Disney+, a streaming platform that will not only block off an enormous amount of existing content (Disney films, ABC shows, Marvel and Pixar films, Lucasfilm, The Simpsons and everything else made by 20th Century Fox), but will also offer a range of new scripted Marvel shows that will directly inform the narrative of the Marvel Cinematic Universe. Essentially, if you want to understand anything that happens in any Marvel film from this point onwards, you’ll need to splash out on a Disney+ subscription.
Apple will also be entering the streaming market at about the same time, promising new work from Sofia Coppola, Jennifer Aniston, Oprah Winfrey, Reese Witherspoon, Brie Larson, Damien Chazelle and Steven Spielberg. In the next three years, Apple will spend $4.2bn on original programming, and you won’t get to see any of it if you don’t pay a monthly premium.
There are so many others. NBCUniversal is pulling its shows from Netflix for its own platform. Before long, Friends is likely to disappear behind a new WarnerMedia streaming service – along with Lord of the Rings films, the Harry Potter films, anything based on a DC comic and everything on HBO – that it is believed will cost about £15 a month. In the UK, the BBC and ITV will amalgamate their archives behind a service called BritBox. The former Disney chairman Jeffrey Katzenberg is about to launch a platform called Quibi, releasing “snackable” content from Steven Spielberg and others that is designed to be watched on your phone. YouTube is producing more and more original subscription-only content. Facebook is making shows, for crying out loud.
And this sucks. Watching television is about to get very, very expensive. There will be a point where viewers are going to hit their tolerance for monthly subscriptions – I may be able to manage one more service, but only if I unsubscribe from an existing platform – meaning that TV will become more elitist, tiered and fragmented than it already is. There’s a huge difference between not being able to watch everything because there’s too much choice and not being able to watch everything because you don’t have enough money.
Most importantly, we should all remember that this content war is hinged upon a fundamental misunderstanding of viewing habits. Netflix didn’t become a monster because people wanted to watch a specific show; it became a monster because people wanted to watch everything.
by Stuart Heritage, The Guardian | Read more:
Image: NBC/Fox
Wednesday, June 26, 2019
Skin Cancer is on the Rise, and Not Just for Golfers
Skin cancer is the commonest type of cancer: There are more new cases each year than there are of all other cancers combined. The principal cause is exposure to ultraviolet radiation from the sun, with the usual contributions from genetic bad luck. Basal cell carcinoma is the most widespread and least-frightening variety. It almost never metastasizes, and, if the tumor is superficial and small, it can sometimes even be obliterated non-surgically, with repeated applications of a topical cream or with a particular kind of light therapy. Next in severity is squamous cell carcinoma, the treatment for which is trickier but usually also straightforward unless the cancer has spread. The worst kind—and, fortunately, a relatively uncommon one, although its incidence is increasing—is melanoma. If melanoma isn't caught early, it can metastasize rapidly to distant parts of the body, and once that happens it's often fatal. Invasive melanoma accounts for a tiny percentage of all skin-cancer cases but for the majority of skin-cancer deaths.
Golfers have always been at greater risk of developing skin cancer than people who never go outside or visit tanning parlors, but even among nongolfers the incidence has been rising for years, worldwide. Studies cited by the Skin Cancer Foundation have shown that, in the United States, cases of nonmelanoma skin cancer increased by 77 percent from 1994 to 2014, and that there will be 7.7 percent more melanoma cases this year than there were in 2018. (Whales are also affected. They're exposed to the sun when they surface, and the skin damage they suffer appears similar to the skin damage suffered by humans.) The main cause for the increases is the depletion of the earth's ozone layer, which is a part of the stratosphere that begins about nine miles up and absorbs ultraviolet radiation that would otherwise broil us. It's like sunscreen for the entire planet. (...)
An excellent place to study the long-term effects of sunlight on human skin is the PGA Tour Champions. If you look closely at Andy North's face during one of his appearances as a commentator on ESPN, for example, you'll notice that his left and right nostrils are different sizes. The reason is that in 1991—after his wife had pointed out that his nose looked odd—he had surgery to remove a large basal cell carcinoma that extended into his left cheek, followed by plastic surgery to repair the quarter-size hole that the excision had created. The USGA persuaded him to write about his experience for Golf Journal, and his article had a big impact on players at all levels. Since that time, he has been an active and effective advocate for skin-cancer prevention and treatment.
North and many other seniors and super-seniors grew up, as I did, in an era when sunburn was viewed as no big deal. In those days, if you applied anything to your skin before going outside, it was almost always in the hope of increasing sun damage, not preventing it. (Sun-darkened skin blocks some UV rays—it's the body's attempt at producing its own sunscreen—but the darkening itself is an indicator of damage. “To be clear,” a dermatologist told me, “there is no such thing as a healthy tan.”) My friends and I used to compete, at the swimming pool, to see who could peel the largest intact sheet of skin from his stomach. When I was in college, I fell asleep on a beach in Mexico and burned my back so badly that I had to lean all the way forward in the passenger seat of a friend's Volkswagen Beetle during our 20-plus-hour drive back to school. The peeling skin hardened into curls the size, shape and approximate color of Fritos: My back looked as though a woodcarver had worked it over with a chisel. A professor of mine removed the curls by (agonizingly) rubbing me down with cold cream—a service that college professors no longer provide to students, I believe. (...)
Stewart Cink had a basal cell carcinoma removed from the side of his nose in 2018. Two years earlier, Cink's wife, Lisa, had begun treatment for advanced breast cancer, and some sportswriters (though not Cink) reacted as though their health problems were roughly equivalent: two cancer cases in one couple! But basal cell carcinoma, by comparison with Stage Four breast cancer, is more like a skinned knee than a medical emergency. People don't die from it, except in the rarest of circumstances, and the treatment doesn't overturn lives, families and careers.
Melanoma, by contrast, truly is scary. Ellen Flynn—a member of my golf club and an occasional mixed-event partner of mine—has had three melanomas, beginning about 15 years ago. “The first was on the back of my calf, and that wasn't so terrible,” she told me recently. “Then, four or five years later, I suddenly saw this major mole on my shoulder.” She'd been having regular checkups with a melanoma specialist, but she couldn't get an appointment right away. “I didn't want to be neurotic, but the mole had come from out of nowhere,” she continued. “So I pursued it, and as soon as the doctor saw it I could tell that it wasn't a good thing.”
The surgeon to whom Flynn's specialist sent her shocked her by telling her that he couldn't guarantee that, after the operation, she'd still have the use of her right arm. (“I'm, like, seriously?”) The visible part of a melanoma can be a minor element of a large and rapidly expanding cancer network, and surgeons sometimes have to cut out huge amounts of tissue. Flynn's tumor, fortunately, turned out to be far less extensive than the surgeon had feared: her golf swing survived. Then, a few years ago, she found a third melanoma, on the shin of her other leg. This one—whew again!—was also neither life- nor golf-threatening. “Plus a thousand other skin cancers, on my face mostly,” she said. “So without makeup I look like a hockey player.” (...)
Ellen Flynn was in her late 50s when she found her first melanoma. That makes her statistically typical—although the statistics are changing. The incidence of melanoma has risen during the past 85 years, from a lifetime risk of roughly 1 in 1,500 in 1935 for people with white skin to something more like 1 in 40 today. (The darker the skin, the lower the risk of skin cancer, although even for people with very dark skin the risk is not zero, and there are melanoma types that are unrelated to sun exposure and appear at similar frequencies across all racial groups.) Diagnoses among people much younger than Flynn have also increased. Melanoma is now the most common skin cancer among people 15 to 19, the most common cancer of any kind among people in their 20s, and the leading cause of cancer death among women 25 to 30. I realized recently that I know shockingly many people who have had melanomas, including two people who were in their 20s. One of those is Tyler Fairbairn, another occasional golf partner of mine (and a former playmate of my children), who's now in his mid-30s. “When I was in graduate school, I noticed that I had a kind of dark, raised thing, like the size of a pencil eraser, on my lower back,” he told me. “The surgeon who operated on it made about a two-inch incision and cut out a bunch all around it.”
Flynn's and Fairbairn's melanomas had not penetrated far into their skin, and for such cases the cure rate, through surgery alone, has always been high. The truly dangerous melanomas are the relatively few that have metastasized. (Skip Nottberg, a high-school classmate of Tom Watson's and an acquaintance of mine, died of one of those in 1997, when he was 47.) Hensin Tsao, who is the clinical director of the Melanoma & Pigmented Lesion Center at Massachusetts General Hospital, told me, “The thicker the tumor, and the bigger the tumor, the more likely it is to reach a blood vessel in the skin, crawl into it, and take off into an internal organ.” Tsao said that as recently as 10 years ago there was very little that could be done for patients whose melanomas had spread to the brain, the liver, the lungs or other body parts, but that several recently developed drugs have turned out to be extremely effective for many patients—so much so that doctors have begun to speak of cures in cases that once would have been considered hopeless.
Among the many challenges with melanoma is that, although 90 percent of cases are related to solar exposure, some types can appear on parts of the body that have seldom, if ever, been exposed to the sun: between two toes, within the folds of the bellybutton, inside the esophagus, on the anus. In Palm Desert a year and a half ago, I played golf with a retired CEO who was undergoing treatment for a melanoma on the tip of a big toe. He said that the cancer had spread to his lymph nodes, an ominous sign, and that the main reason it hadn't been diagnosed earlier was that its odd location and unusual appearance had fooled his doctor into thinking it was something else. A number of years ago, a nephew of a friend of a friend of mine was told by his ophthalmologist, during a routine eye exam, that he needed to see an oncologist right away. He did so, and learned that what the ophthalmologist had noticed, inside his eyeball, was an ocular melanoma. Six weeks later, he was dead.
by David Owen, Golf Digest | Read more:
Image: C.J. Burton
Facebook’s Faux Cryptocurrency
Ideological purity is a common affliction these days. It’s also one the cryptocurrency community is particularly prone to. Witness the upset over this week’s announcement from Facebook that it is to be the prime mover behind a new cryptocurrency, Libra.
Everyone instantly hated the idea. Real cryptos are about privacy and freedom. They are decentralised and permissionless – no one runs them, no one can be prevented from using them and the system never needs reference to a central authority.
Libra is to be none of these wonderful things. It is to be run by an actual organisation – the Swiss-based Libra Association, made up of Facebook and 27 partners. It is centralised and permissioned – and its value will depend not on anything intrinsic to it but on the value of a basket of currencies, something that makes it seem more like an exchange-traded fund than a currency in its own right. Worst of all, the Libra Association is planning to make money from Libra, too.
Facebook has an obvious interest in bringing the world’s financial transactions in-house. But there’s another element: the interest from the deposits and government bonds backing Libra will not go to the people holding it. It will be used to pay for the system’s operating costs and, once those are covered, to the founding members as dividends. Add it all up and, to anyone of a puritanical crypto bent, this is clearly not quite right.
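[ed. To make the basket mechanics concrete, here is a minimal sketch of how an SDR-style currency basket is valued. All weights and exchange rates below are invented for illustration — Libra's actual basket composition had not been disclosed at the time of the announcement.]

```python
# Hypothetical sketch: one Libra defined as a fixed bundle of currency
# amounts, valued at spot exchange rates (like the IMF's SDR, or an ETF).
# The bundle and the rates are invented for illustration only.

BASKET = {"USD": 0.50, "EUR": 0.18, "JPY": 14.0, "GBP": 0.08}      # units per Libra
USD_RATES = {"USD": 1.0, "EUR": 1.12, "JPY": 0.0093, "GBP": 1.27}  # assumed spot rates

def libra_value_usd(basket, rates):
    """Dollar value of one Libra: each component valued at its spot rate."""
    return sum(amount * rates[ccy] for ccy, amount in basket.items())

print(f"1 Libra = ${libra_value_usd(BASKET, USD_RATES):.2f}")
# A swing in any single currency moves the basket by only its share of the
# total -- the diversification that is supposed to keep the coin stable.
```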
Libra could be a sovereignty game-changer
There is lots of detail still to come on how Libra will work. While we wait for some of that, it is true that there are things to worry about. One is privacy. Christina Frankopan, special projects lead at colony.io and a senior adviser to Lazard, questions the use of the metadata Libra will throw up, given that Facebook may be able to “triangulate this with other data sets to give them unprecedented knowledge of consumer behaviour and spending”.
If you are worried about the way financial apps might use data on your spending patterns, you should be really worried about how a vast social network morphing into a financial network might use it.
Anyone with your social media data can guess what you might buy. Anyone with your financial data knows already. Longer term there are the huge issues of what happens if Libra were to become genuinely successful. How does that affect national sovereignty?
Bitcoin has caused endless angst among central bankers but it hasn’t much bothered governments. That’s partly because, so far, it has been marginal stuff. But also because it hasn’t really acted as a currency. It isn’t a particularly effective or scalable means of payment, since almost no one actually uses it. It is a hopeless store of value – huge unpredictable swings don’t work for most buyers or sellers.
And it has not become a value reference in itself. If you have bitcoin, you think about its value not in bitcoin but in dollars. Libra could be entirely different, particularly in the last sense. If it really is based on a basket of currencies and is stable as a result it might not take long at all for us to refer to the value of things in Libras. A Libra could just be a Libra. That is a sovereignty game-changer.
Libra could work precisely because it isn’t a cryptocurrency
But let’s put all this to one side. Stop thinking about Libra as if you were a cryptocurrency expert. Start thinking about it as a consumer and you can see why it might work.
We haven’t adopted bitcoin or any other cryptocurrency for all sorts of reasons. We can’t quite get our heads around the idea that it makes sense to use something invented by a very shadowy and entirely unidentifiable entity. We can’t really understand the way bitcoin is mined (using computers to solve increasingly difficult maths problems). Our minds boggle every time we read about how mining for bitcoin uses as much energy as mining for gold. Then there is the scalability, the volatility and the difficulty of storage and use. Baffling.
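[ed. For the curious, those “increasingly difficult maths problems” boil down to a brute-force hash search. A toy sketch follows — the same principle as Bitcoin mining, though the real network uses double SHA-256 and a vastly harder target.]

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(data + nonce)
    falls below a target. Bitcoin works on this principle, but with
    double SHA-256 and an enormously harder target."""
    target = 1 << (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each added difficulty bit doubles the expected work -- the root of
# mining's notorious energy appetite.
print(mine(b"example block", 16))  # ~65,000 hashes on average
```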
Libra could solve all these problems. Facebook might be a bit shadowy but at least it exists as an accountable brand. And while we might not trust it as a standalone backer, we all quite clearly trust the likes of Mastercard, Visa and PayPal (all also founder members of the Libra Association) with our money.
Image: uncredited
Bill Gates' Biggest Mistake
Microsoft co-founder Bill Gates recently gave a wide-ranging interview to VC firm Village Global, and at one point, the topic of mobile came up. Gates revealed his biggest regret while at Microsoft was a failure to lead Microsoft into a solid position in the smartphone wars.
In the software world—particularly for platforms—these are winner-take-all markets. So, you know, the greatest mistake ever is whatever mismanagement I engaged in that caused Microsoft not to be what Android is. That is, Android is the standard non-Apple phone platform. That was a natural thing for Microsoft to win, and you know it really is winner-take-all. If you're there with half as many apps or 90 percent as many apps, you're on your way to complete doom. There's room for exactly one non-Apple operating system. And what's that worth? Four hundred billion? That would be transferred from Company G to Company M. And it's amazing to me having made one of the greatest mistakes of all time—and there was this antitrust lawsuit and various things—our other assets—Windows, Office—are still very strong. So we are a leading company. If we'd got that one right, we would be the leading company. But oh well.
In the interview, Gates takes full responsibility for not reacting to the new era of smartphones. But by that time, he already had a foot out the door at Microsoft to focus on the Bill & Melinda Gates Foundation. The original iPhone came out in 2007, and the first Android device was released in 2008. Gates had already announced his transition plan in June 2006.
The CEO of Microsoft at the time was Steve Ballmer, who famously laughed at the iPhone and called the $500 device "The most expensive phone in the world" while deriding its lack of a hardware keyboard. "There's no chance that the iPhone is going to get any significant market share," Ballmer once told USA Today. "No chance."
Apple went on to sell over 2 billion iPhones.
The launch of the iPhone was a huge inflection point in the tech landscape, and the way companies reacted to it would shape their fortune for years to come. Unlike Microsoft, Google took the iPhone seriously. Google was investing in mobile before the iPhone was announced, having acquired Andy Rubin's Android, Inc. in 2005. The team was working on a Blackberry-style OS, but once the iPhone was announced, Google's mobile division realized it would need to "start over" on an all-touch interface in response. This decision eventually led to the launch of Android 1.0. (...)
Microsoft would eventually take on the iPhone and Android with Windows Phone, but its slow response and failure to recognize the modern smartphone revolution meant Windows Phone would only launch in late 2010. By then, it was too late. Google was throwing an unprecedented amount of resources behind its mobile efforts and, by 2010, had grown too powerful, with something like six major releases of Android and a suite of killer apps like Gmail, Search, YouTube, and Google Maps. Microsoft could build an OS, but it couldn't compete with Google's services. Windows Phone was killed by the app gap.
Today, Android owns 85 percent of the smartphone OS market and is the most popular operating system in the world—mobile or otherwise—just ahead of Windows.
by Ron Amadeo, Ars Technica | Read more:
Image: Village Global
Tuesday, June 25, 2019
The Chronic-Pain Quandary
Amid a reckoning over opioids, a doctor crusades for caution in cutting back.
About four years ago, Dr. Stefan Kertesz started hearing that patients who had been taking opioid painkillers for years were being taken off their medications. Sometimes it was an aggressive reduction they weren’t on board with, sometimes it was all at once. Clinicians told patients they no longer felt comfortable treating them.
Kertesz, a primary care physician who also specializes in addiction medicine, had not spent his career investigating long-term opioid use or chronic pain. But he grew concerned by the medical community’s efforts to regain control over prescribing patterns after years of lax distribution. Limiting prescriptions for new patients had clear benefits, he thought, but he wondered about the results of reductions among “legacy patients.” Their outcomes weren’t being tracked.
Now, Kertesz is a leading advocate against policies that call for aggressive reductions in long-term opioid prescriptions or have resulted in forced cutbacks. He argues that well-intentioned initiatives to avoid the mistakes of the past have introduced new problems. He’s warned that clinicians’ decisions are destabilizing patients’ lives and leaving them in pain — and in some cases could drive patients to obtain opioids illicitly or even take their lives.
“I think I’m particularly provoked by situations where harm is done in the name of helping,” Kertesz said. “What really gets me is when responsible parties say we will protect you, and then they call upon us to harm people.”
It’s a case that Kertesz, 52, has tried to make with nuance and precision, bounded by an emphasis on the history of overprescribing and the benefits of tapering for patients for whom it works. But against a backdrop of tens of thousands of opioid overdose deaths each year and an ongoing reckoning about the roots of the opioid addiction crisis, it’s the dialectical equivalent of pinning the tail on a bucking bronco. Kertesz’s critics have questioned his motives. He’s heard he’s been called “the candyman.” (...)
Opioid prescribing has been declining since 2012, though levels remain higher than they were two decades ago. Today, depending on the estimate, anywhere from 8 million to 18 million Americans take opioids for chronic pain.
The interest in reducing their dosages is predicated in part on efforts to minimize patients’ risk of overdose and addiction. But there are other considerations. Enduring opioid use makes people more sensitive to pain, many experts believe. Opioid use has also been associated with anxiety, depression, and other health issues.
Plus, as people become dependent, the drugs might just be staving off symptoms of withdrawal that would come without another dose, rather than treating the original source of pain.
In short, experts say, long-term opioid use is not good medicine.
Kertesz, who is also a professor at the University of Alabama at Birmingham School of Medicine, agrees with all of that. But he believes that lowering dosages will hurt some patients who are leading functional lives on opioids, and that top-down strategies won’t protect them.
So, in 2015, when the Centers for Disease Control and Prevention proposed prescribing guidelines for primary care clinicians treating chronic pain, Kertesz grew nervous.
The guidelines, a set of measured recommendations finalized in March 2016, suggested clinicians try other therapies for pain before moving to opioids and prescribe only the lowest effective dose and duration of the drugs. (The guidelines do not apply to end-of-life or cancer care.) For patients on high doses, the guidelines said, “If benefits do not outweigh harms of continued opioid therapy, clinicians should optimize other therapies and work with patients to taper opioids to lower dosages or to taper and discontinue opioids.”
“Our day-to-day practice aligns with nearly all principles laid out in the guideline,” Kertesz wrote in a comment he submitted on the draft. But he cautioned the voluntary recommendations could be implemented too stringently by others.
“This is a guideline like no other … its guidance will affect the immediate well-being of millions of Americans with chronic pain,” he wrote.
After the release of the guidelines, Kertesz started seeing ripple effects. In early 2017, federal officials unveiled a Medicare proposal that would have blocked prescriptions higher than 90 MME (morphine milligram equivalents) without a special review. Around the same time, the National Committee for Quality Assurance considered docking clinicians’ scores if they had patients on high doses for long periods.
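[ed. A minimal sketch of the arithmetic behind such a dose cap, using the CDC's published conversion factors for a few common opioids — simplified, since methadone and transdermal fentanyl require special handling, and the patient here is hypothetical.]

```python
# Daily morphine milligram equivalents (MME) from a dosing regimen.
# Conversion factors are the CDC's published values for common opioids;
# methadone and transdermal fentanyl are omitted (they need special rules).
MME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
}

def daily_mme(regimen):
    """regimen: iterable of (drug, mg_per_dose, doses_per_day) tuples."""
    return sum(mg * per_day * MME_FACTORS[drug] for drug, mg, per_day in regimen)

# Hypothetical patient: 10 mg oxycodone four times daily.
dose = daily_mme([("oxycodone", 10, 4)])
print(f"{dose:.0f} MME/day")                    # 60 MME/day
print("Needs review under a 90 MME cap:", dose > 90)
```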
Kertesz, other experts, and some medical societies protested such proposals, contending they invoked the CDC guidelines while violating them.
“Most of us wish to see an evolution toward fewer opioid starts and fewer patients at high doses,” Kertesz and colleagues wrote in response to the NCQA plan. “The proposed NCQA measure indulges no such subtleties.”
The discussion overall has been hindered by limited research; evidence for the benefits of forced tapering, in particular, is scant. But as of October 2018, 33 states had codified some prescription limits into law. Pharmacies and insurers capped prescriptions at 90 MME. Law enforcement agencies warned high prescribers.
Some initiatives have focused on avoiding “new starts,” not on tapering legacy patients. But Kertesz and other advocates argued the pressure of all the policies and warnings inculcated an anxiety around prescribing.
Chronic pain patients were seen as legally risky and medically complicated, so they had trouble finding providers.
Kertesz and his allies raised their concerns in the popular and academic presses and at conferences, building momentum over the years. They collected anecdotes from patients who said they had been harmed in some way by dose reductions or involuntary tapers.
“It is imperative that healthcare professionals and administrators realize that the Guideline does not endorse mandated involuntary dose reduction or discontinuation,” read a March letter co-authored by Kertesz calling on the CDC to reiterate its recommendations were not binding. The letter continued: “Patients have endured not only unnecessary suffering, but some have turned to suicide or illicit substance use.”
More than 300 patient advocates and experts, including three former White House drug czars, signed it.
by Andrew Joseph, STAT | Read more:
Image: Tamika Moore
[ed. Read the comments. America has a schizophrenic problem when it comes to mood-altering drugs (see here, here and here). Unfortunately, pain killers fall into this category. If the ongoing 'War on Drugs' (and Prohibition before it) taught us anything, it's that targeting supply while ignoring demand is a recipe for failure (with sometimes horrific unintended consequences). People are dying not because drugs are easily available but because they aren't, and this uncontrolled environment creates an opportunity for all kinds of other Bad Things to happen (e.g., flourishing crime organizations, dangerously adulterated products, property crimes, soaring suicide rates, etc.). The government and medical community's message: we want you to feel better, but not too good (and if unrelieved pain causes you to self-medicate, stick to approved drugs like alcohol, tobacco and anti-depressants; or just get more exercise, think positive thoughts and meditate your way out of the pain). One might reasonably ask why people need to escape reality in the first place (and if that's inherently a bad thing or just normal human behavior). Nearly every culture on earth since humans came onto the scene has had some form of mood-altering drug(s) as a component. See also: The Government's Cure for the Opioid Epidemic May Be Worse Than the Disease (Reason), and Faced with an outcry over limits on opioids, authors of CDC guidelines acknowledge they've been misapplied (STAT).]