Friday, August 30, 2019
Thursday, August 29, 2019
Is My Millennial Co-Worker a Narcissist, or Am I a Jealous Jerk?
Millennials’ Revolt
My co-worker seems to work more for their (I don’t want to specify gender) personal brand than for the company. This team member posts their whereabouts on Slack: They’re at a conference, at class (coursework tangential to their job), working from home! They keep us up to date on the minutiae of their travel (leaving at 11 a.m.! on a train without Wi-Fi until 7 p.m.!). They meet their goals, but I’m not privy to what their results look like — are they treading water or exceeding their goals?
I could be glad this younger co-worker is out and about so much, but the department doesn’t benefit in any way. (We’re in marketing.) When this co-worker reports on conferences, they don’t say how what they learned will help us.
Another co-worker and I try to sort out if we’re jealous. (We have family obligations and perhaps we’re a bit stodgy?) But I think if someone is getting smarter on the company dollar, they should share with their team. Instead, we’re on the outside, watching our co-worker flit from thing to thing, polishing their own brand.
Am I not thinking the new-think? Or is this person a workplace narcissist? Why does it bother us so much? What language can I use with co-worker’s supervisor and the department head that doesn’t make it seem like a personality issue, but about adding value to the organization? Or is it just that co-worker’s personality and mine are far apart and I should look for my own classes and conferences and polish my own brand?
What’s the balance between what’s good for the individual vs. good for the team?
— K.C.

I’ve previously outed myself as a millennial in this column, and I suppose I should further disclose that I recently (and quite publicly) quit my job and got a new one thanks in part to my largely positive reputation in an industry known for absurd levels of upheaval. So! I am impressed by your colleague’s savvy brand-building, which I strongly suspect has less to do with narcissism than with their experiences making a career in a post-financial crisis world. I have never had a job that didn’t feel tenuous, which means I have never had the freedom to not obsess over my personal brand and whether I’m doing enough to burnish it through work, social media, skill-building and networking. Of course we would rather quit Twitter and stop going to conferences and professional mixers and take all our vacation days and develop real hobbies and deeper human connections, but the entire economic system has shown us over and over that we cannot, because we will end up broke disappointments to everyone we know. (Malcolm Harris’s excellent book “Kids These Days,” which details how millennials were shaped by economic trauma, is a worthwhile read on this subject.)
If you are interested in taking classes and attending conferences, why not take your company up on its ability to pay for them? If you’re not in a position to attend because of your family commitments, that’s O.K. too, but it doesn’t mean your colleague needs to stop attending. If opportunities aren’t being doled out unequally and you aren’t being forced to take on extra work to cover for their absence, whether they are an average performer or a superstar really doesn’t concern you. The fact that you are not responsible for this person’s work outcomes and that you are considering complaining to their supervisor — who is responsible for said work outcomes, and surely knows where their employee is on a given day — suggests it is not in fact about “adding value” but pure resentment.
This is the economic system’s fault, too. You’ve been set up to resent millennials just as much as we’ve been set up to resent you. The good news is that you can still break the cycle.
If you are genuinely curious about learning more from your co-worker’s experiences, try asking! Deliberately hoarding information would be a weird strategy; it seems far more likely that they don’t realize anyone would be interested. Might a friendly message asking if they’d be willing to have lunch and talk about some of the most interesting parts of the most recent conference benefit you both more than lingering resentment?
by Megan Greenwell, NY Times | Read more:
Image: Margeaux Walter for The New York Times
[ed. I don't necessarily agree with this response, but if you're interested read the comments and decide for yourself.]
Stone Tools Suggest the First Americans Came From Japan
Evidence from the Cooper's Ferry archaeological site in Western Idaho shows that people lived in the Columbia River Basin around 16,000 years ago. That's well before a corridor between ice sheets opened up, clearing an inland route south from the Bering land bridge. That suggests that people migrated south along the Pacific coast. Stone tools from the site suggest a possible connection between these first Americans and Northeast Asian hunter-gatherers from the same period.
Route closed due to ice
A piece of charcoal unearthed in the lowest layer of sediment that contains artifacts is between 15,945 and 15,335 years old, according to radiocarbon dating. More charcoal, from the remains of an ancient hearth pit, dated to between 14,075 and 15,195 years old. A few other pieces of bone and charcoal returned radiocarbon dates in the 14,000- to 15,500-year-old range. In higher, more recent layers, archaeologists found bone and charcoal as recent as 8,000 years old, with a range of dates in between.
This makes clear that people had been using the Cooper's Ferry site for a very long time, but it's hard to say whether they stuck around or just kept coming back. "Because we did not excavate the entire site, it is difficult to know if people occupied the site continuously starting at 16,000 years ago," Oregon State University archaeologist Loren Davis told Ars. "I expect that this site was used on a seasonal basis, perhaps as a base camp for hunting, gathering, and fishing activities." (...)
Davis and his colleagues used a statistical model to calculate how old the very oldest layers of artifacts at the site should be. "The Bayesian model makes predictions about the age of the lower portion of [the excavated layers] based on the chronological trend of known radiocarbon ages in the upper and middle third," Davis explained. According to the model, the very oldest artifacts at Nipéhe are probably between 16,560 and 15,280 years old.
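The core idea of the age-depth approach described above can be sketched in a few lines of code. To be clear, this is an illustrative sketch only, not the team's actual Bayesian model: the depths and ages below are made-up values, and a simple least-squares fit stands in for the real analysis. It shows only the general principle of predicting the age of deeper, undated layers from the trend of dated layers above them.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical dated samples: (depth in cm below surface, age in years BP).
samples = [(100, 8000), (150, 11000), (200, 14000), (230, 15500)]
depths = [d for d, _ in samples]
ages = [a for _, a in samples]

slope, intercept = fit_line(depths, ages)

# Extrapolate the trend down to the base of the artifact-bearing
# deposit (hypothetical depth) to estimate the age of the oldest layer.
base_depth = 260
predicted_age = slope * base_depth + intercept
```

A real Bayesian age-depth model does considerably more than this (it propagates the uncertainty of each radiocarbon date and enforces the constraint that deeper layers must be older), which is how the team arrived at a probable range rather than a single number.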
That's about 2,000 to 1,500 years before the great continent-spanning ice sheets of the Pleistocene began to break up. That break-up opened an ice-free corridor southward from the Bering land bridge between the towering sides of the Cordilleran and Laurentian ice sheets. According to computer simulations, that corridor was closed and buried under several kilometers of ice until at least 14,800 years ago, and possibly even later. And that has some important implications for when, and how, people first set foot in the Americas.
The coastal route
If the ice-free corridor wasn't open, the only way to get south of the ice sheets would have been to skirt along the Pacific coast on foot or by boat, moving among locations where the edges of the 4km (2.5 miles) thick glaciers didn't quite reach the Pacific Ocean. Much of the Ice Age coastline is now underwater, largely thanks to the melting of those huge glaciers. But there have been a few recent archaeological finds that support the idea that the first humans in the Americas moved south along the coast much earlier than previously thought. (...)
A Japanese connection?
Buried in the Ice Age layers at Nipéhe, Davis and his colleagues found animal bones and discarded stone tools, including bifaces (two-sided handaxes; think of them as prehistoric multi-tools), blades, sharp stone flakes, and fragments of two projectile points. The tool collection didn't look a thing like the fluted projectile points that have become the archaeological calling card of the Clovis culture.
To make a Clovis-style projectile point, the flint-knapper has to chip off a flake from one or both faces at a point right at the base of the object. That creates a small groove (also called a flute), which makes it easier to fit the point onto the shaft of a spear or arrow. But at Nipéhe (and at a few other pre-Clovis sites in the Americas), people took the opposite approach: they shaped the base of the point into a stem to attach to the spear or arrow shaft. Some of the younger stone tools from Nipéhe are about the same age as the Clovis culture, but they're clearly a separate technology.
Stemmed projectile points aren't a recent technology, even by archaeological standards; people figured out that stems made points easier to haft by around 50,000 years ago in Africa, Asia, and the Levant. But there are different ways to shape a chunk of flint into a stemmed point, and the ones at Nipéhe look strikingly similar to stemmed points from Northeast Asia. Similarities are especially strong with items from the Japanese island of Hokkaido, which have turned up at sites dating between 16,000 and 13,000 years ago. (As an interesting side note, stemmed projectile points from a 13,500-year-old site in Kamchatka, in east Russia, were made with a distinctly different style.) (...)
Other aspects of the stone tools at Nipéhe also resemble the ones being made and used on Hokkaido at around the same time and slightly earlier. Davis and his colleagues claim that similarity is no coincidence. They suggest that the similar stone tool technology is evidence of a cultural link between the earliest Americans—who arrived on the Pacific coast and migrated southward before moving inland south of the ice sheets—and people in Northeastern Asia.
The dates line up well; many of the Hokkaido sites with stemmed points are older than Nipéhe, while others are around the same age.
by Kiona N. Smith, Ars Technica | Read more:
Image: Loren Davis
Reframing Superintelligence
Ten years ago, everyone was talking about superintelligence, the singularity, the robot apocalypse. What happened?
I think the main answer is: the field matured. Why isn’t everyone talking about nuclear security, biodefense, or counterterrorism? Because there are already competent institutions working on those problems, and people who are worried about them don’t feel the need to take their case directly to the public. The past ten years have seen AI goal alignment reach that level of maturity too. There are all sorts of new research labs, think tanks, and companies working on it – the Center For Human-Compatible AI at UC Berkeley, OpenAI, Ought, the Center For The Governance Of AI at Oxford, the Leverhulme Center For The Future Of Intelligence at Cambridge, etc. Like every field, it could still use more funding and talent. But it’s at a point where academic respectability trades off against public awareness at a rate where webzine articles saying CARE ABOUT THIS OR YOU WILL DEFINITELY DIE are less helpful.
One unhappy consequence of this happy state of affairs is that it’s harder to keep up with the field. In 2014, Nick Bostrom wrote Superintelligence: Paths, Dangers, Strategies, giving a readable overview of what everyone was thinking up to that point. Since then, things have been less public-facing, less readable, and more likely to be published in dense papers with a lot of mathematical notation. They’ve also been – no offense to everyone working on this – less revolutionary and less interesting.
This is one reason I was glad to come across Reframing Superintelligence: Comprehensive AI Services As General Intelligence by Eric Drexler, a researcher who works alongside Bostrom at Oxford’s Future of Humanity Institute. This 200-page report is not quite as readable as Superintelligence; its highly-structured outline form belies the fact that all of its claims start sounding the same after a while. But it’s five years more recent, and presents a very different vision of how future AI might look.
Drexler asks: what if future AI looks a lot like current AI, but better?
For example, take Google Translate. A future superintelligent Google Translate would be able to translate texts faster and better than any human translator, capturing subtleties of language beyond what even a native speaker could pick up. It might be able to understand hundreds of languages, handle complicated multilingual puns with ease, do all sorts of amazing things. But in the end, it would just be a translation app. It wouldn’t want to take over the world. It wouldn’t even “want” to become better at translating than it was already. It would just translate stuff really well.
The future could contain a vast ecosystem of these superintelligent services before any superintelligent agents arrive. It could have media services that can write books or generate movies to fit your personal tastes. It could have invention services that can design faster cars, safer rockets, and environmentally friendly power plants. It could have strategy services that can run presidential campaigns, steer Fortune 500 companies, and advise governments. All of them would be far more effective than any human at performing their given task. But you couldn’t ask the presidential-campaign-running service to design a rocket any more than you could ask Photoshop to run a spreadsheet.
In this future, our AI technology would have taken the same path as our physical technology. The human body can run fast, lift weights, and fight off enemies. But the automobile, crane, and gun are three different machines. Evolution had to cram running-ability, lifting-ability, and fighting-ability into the same body, but humans had more options and were able to do better by separating them out. In the same way, evolution had to cram book-writing, technology-inventing, and strategic-planning into the same kind of intelligence – an intelligence that also has associated goals and drives. But humans don’t have to do that, and we probably won’t. We’re not doing it today in 2019, when Google Translate and AlphaGo are two different AIs; there’s no reason to write a single AI that both translates languages and plays Go. And we probably won’t do it in the superintelligent future either. Any assumption that we will is based more on anthropomorphism than on a true understanding of intelligence.
These superintelligent services would be safer than general-purpose superintelligent agents. General-purpose superintelligent agents (from here on: agents) would need a human-like structure of goals and desires to operate independently in the world; Bostrom has explained ways this is likely to go wrong. AI services would just sit around algorithmically mapping inputs to outputs in a specific domain.
Superintelligent services would not self-improve. You could build an AI researching service – or, more likely, several different services to help with several different aspects of AI research – but each of them would just be good at solving certain AI research problems. It would still take human researchers to apply their insights and actually build something new. In theory you might be able to automate every single part of AI research, but it would be a weird idiosyncratic project that wouldn’t be anybody’s first choice.
Most important, superintelligent services could help keep the world safe from less benevolent AIs. Drexler agrees that a self-improving general purpose AI agent is possible, and assumes someone will build one eventually, if only for the lulz. He agrees this could go about the way Bostrom expects it to go, ie very badly. But he hopes that there will be a robust ecosystem of AI services active by then, giving humans superintelligent help in containing rogue AIs. Superintelligent anomaly detectors might be able to notice rogue agents causing trouble, superintelligent strategic planners might be able to develop plans for getting rid of them, and superintelligent military research AIs might be able to create weapons capable of fighting them off.
Drexler therefore does not completely dismiss Bostromian disaster scenarios, but thinks we should concentrate on the relatively mild failure modes of superintelligent AI services. These may involve normal bugs, where the AI has aberrant behaviors that don’t get caught in testing and cause a plane crash or something, but not the unsolveable catastrophes of the Bostromian paradigm. Drexler is more concerned about potential misuse by human actors – either illegal use by criminals and enemy militaries, or antisocial use to create things like an infinitely-addictive super-Facebook. He doesn’t devote a lot of space to these, and it looks like he hopes these can be dealt with through the usual processes, or by prosocial actors with superintelligent services on their side (thirty years from now, maybe people will say “it takes a good guy with an AI to stop a bad guy with an AI”).
This segues nicely into some similar concerns that OpenAI researcher Paul Christiano has brought up. He worries that AI services will be naturally better at satisfying objective criteria than at “making the world better” in some vague sense. Tasks like “maximize clicks to this site” or “maximize profits from this corporation” are objective criteria; tasks like “provide real value to users of this site instead of just clickbait” or “have this corporation act in a socially responsible way” are vague. That means AI may asymmetrically empower some of the worst tendencies in our society without giving a corresponding power increase to normal people just trying to live enjoyable lives. In his model, one of the tasks of AI safety research is to get AIs to be as good at optimizing vague prosocial tasks as they will naturally be at optimizing the bottom line. Drexler doesn’t specifically discuss this in Reframing Superintelligence, but it seems to fit the spirit of the kind of thing he’s concerned about.
I’m not sure how much of the AI alignment community is thinking in a Drexlerian vs. a Bostromian way, or whether that is even a real dichotomy that a knowledgeable person would talk about. I know there are still some people who are very concerned that even programs that seem to be innocent superintelligent services will be able to self-improve, develop misaligned goals, and cause catastrophes. I got to talk to Dr. Drexler a few years ago about some of this (although I hadn’t read the book at the time, didn’t understand the ideas very well, and probably made a fool of myself); at the time, he said that his work was getting a mixed reception. And there are still a few issues that confuse me.
by Scott Alexander, Slate Star Codex | Read more:
Image: via
Whatever Happened to Fun?
[ed. Repost. Classic.]
There’s a New Alaska State Record For Giant Pumpkins
Dale Marshall of Anchorage broke his previous state record by nearly 600 pounds when the pumpkin he entered this year tipped the scale at 2,051 pounds during the Alaska State Fair weigh-off Tuesday.
“It was mind blowing,” Marshall said.
“I wasn’t even thinking 2,000 pounds. I thought it would weigh between 1,700 and 1,900 pounds using the tape measure method. In pumpkin-growing land around the world, that is an elite club to grow 2,000. Nobody has grown a pumpkin this size this far north in the world.”
Marshall said he thought weather was a big factor this year. “With all the sunny days I got plenty of heat in the greenhouse. The pumpkin is 89 days old. Nothing happens the first days. In 79 days, it grew to 2,051 pounds. That’s an average of 25 pounds per day. It grew 50 pounds a day in parts of July.” Marshall said growing the pumpkin required at least 75 gallons of water a day, and as much as a couple hundred gallons a day.
“I still can’t believe it,” Marshall added.
by Bill Roth, ADN | Read more:
Image: Bill Roth
Wednesday, August 28, 2019
End War Or Mosquitoes?
Malaria may have killed half of all the people that ever lived. (more)
Over one million people die from malaria each year, mostly children under five years of age, with 90% of malaria cases occurring in Sub-Saharan Africa. (more)
378,000 people worldwide died a violent death in war each year between 1985 and 1994. (more)
Over the last day I’ve done two Twitter polls, one of which was my most popular poll ever. Each poll was on whether, if we had the option, we should try to end a big old nemesis of humankind. One was on mosquitoes, the other on war: (...)
In both cases the main con argument is a worry about unintended side effects. Our biological and social systems are both very complex, with each part having substantial and difficult-to-understand interactions with many other parts. This makes it hard to be sure that an apparently bad thing isn’t actually causing good things, or preventing other bad things.
Poll respondents were about evenly divided on ending mosquitoes, but over 5 to 1 in favor of ending war. Yet mosquitoes kill many more people than do wars, mosquitoes are only a small part of our biosphere with only modest identifiable benefits, and war is a much larger part of key social systems with much easier to identify functions and benefits. For example, war drives innovation, deposes tyrants, and cleans out inefficient institutional cruft that accumulates during peacetime. All these considerations favor ending mosquitoes, relative to ending war.
Why then is there so much more support for ending war, relative to mosquitoes? The proximate cause seems obvious: in our world, good people oppose both war and also ending species. Most people probably aren’t thinking this through, but are instead just reacting to this surface ethical gloss. Okay, but why is murderous nature so much more popular than murderous features of human systems? Perhaps in part because we are much more eager to put moral blame on humans, relative to nature. Arguing to keep war makes you seem like allies of deeply evil humans, while arguing to keep mosquitoes only makes you allies of an indifferent nature, which makes you far less evil by association.
by Robin Hanson, Overcoming Bias | Read more:
US High School Sports Participation Drops
Led by a decline in football for the fifth straight year, participation in high school sports dropped in 2018-19 for the first time in 30 years, according to an annual survey conducted by the National Federation of State High School Associations.
The 2018-19 total of 7,937,491 participants was a decline of 43,395 from the 2017-18 school year, when the number of participants in high school sports reached a record high of 7,980,886.
The last decline in sports participation numbers occurred during the 1988-89 school year.
The group said 11-man football dropped by 30,829 to 1,006,013, the lowest mark since the 1999-2000 school year. It was the fifth consecutive year of declining football participation.
“We know from recent surveys that the number of kids involved in youth sports has been declining, and a decline in the number of public school students has been predicted for a number of years, so we knew our ‘streak’ might end someday,” Dr Karissa Niehoff, NFHS executive director, said in a statement. “The data from this year’s survey serves as a reminder that we have to work even harder in the coming years to involve more students in these vital programs – not only athletics but performing arts programs as well.”
Although the number of participants in boys’ 11-player football dropped, the number of schools offering the sport remained steady. The survey indicated that 14,247 schools offer 11-player football, an increase of 168 from last year. A comparison of the figures from the past two years indicates that the average number of boys involved in 11-player football on a per-school basis dropped from 73 to 70, which includes freshman, junior varsity and varsity teams.
While participation in boys’ 11-player football dropped in all but seven states, participation in six-, eight- and nine-player football gained 156 schools and 1,594 participants nationwide, with the largest increase in boys’ eight-player football from 19,554 to 20,954. In addition, in the past 10 years, participation by girls in 11-player football has doubled, from 1,249 in the 2009-10 school year to 2,404 last year.
“The survey certainly confirms that schools are not dropping the sport of football, which is great news,” Niehoff said. “Certainly, we are concerned about the reduction in the number of boys involved in the 11-player game but are thrilled that states are finding other options by starting six-player or eight-player football in situations where the numbers have declined.”
by AP, The Guardian | Read more:
Image: Don Campbell/AP
[ed. I think contact sports in high school will continue to decline for medical reasons - but mostly because they aren't such a direct link to popularity anymore.]
Tuesday, August 27, 2019
The Scientific Way to Get Over A Break-Up
Something strange happens when we break up. A concoction of memories and thoughts occupies our minds — anything from “my life is over” to “I’ll make the best of my regained freedom”. There is self-doubt and pain, along with the regular chanting of “this sucks”. Yes, it does suck. Break-ups suck for the person being dumped and they suck for the person doing the nasty deed.
A break-up may be cruel or cordial; rarely is it neutral. These are not our finest hours. And the longer a relationship lasted, the harder it will be to get over it. You’ve shared a life, your dreams, a home, a sense of self. And suddenly you find yourself in the throes of a refurbishment project you didn’t even ask for. But understanding the scientific basis of break-ups — why people do it and how they get over it, alongside the neuroscientific underpinnings of heartbreak — may offer an opportunity for self-analysis.
Knowing why you feel the way you feel may provide some much-needed perspective; the necessary distance to re-examine your thoughts. Will your new scientific appreciation work wonders and lift you back into the realm of those who have got it all together? Hardly, because yes, getting over a partner takes time (and I’ll explain why below). But it serves as a nice reminder that perhaps there is no magic here. Yes, perhaps heartbreak is but a melting pot of thoughts and brain chemicals.
Why we break up
Every relationship is unique, and you will have your reasons for calling it quits (or your other half will). But according to research there are eight main arguments for a break-up: the desire to be more autonomous, not sharing the same interests or character traits, not being supportive enough, not being open enough, not being loyal, not spending enough time together, not being fair enough to each other, and the loss of romance. Chances are your break-up falls into more than one of these categories. (Interestingly, for women, autonomy is one of the main reasons for a break-up.)
If you’re mending a broken heart at this moment, realize that you did not have control over how your partner felt. They arrived at this conclusion for a reason and it may not even be a good reason, but that’s not debatable.
Yet according to scientific evidence, how long a relationship may last can be (somewhat) predicted. When Galena K. Rhoades at the University of Denver, U.S., began to study relationship commitment, she couldn’t have known just how much constraining factors matter. So what are ‘constraint commitments’? They’re restrictions which make us more committed to staying in a relationship. Rhoades proposed three types:
1. Perceived constraints, which include external factors. They include social pressures to stay together or the feeling that you invested a lot into a relationship. Maybe you think that your life as you know it will come to a halt or you’re worried about your partner’s mental health.
2. Material constraints include financial and physical pressures, such as owning a property or a pet together, sharing furniture or a bank account.
3. Felt constraints describe the feeling of being trapped or stuck in a relationship.
Rhoades recruited 1,184 individuals between the ages of 18 and 35, all of whom had been in a relationship for at least two months. Over eight months, participants received two rounds of questionnaires to examine their dedication to their partners and the three constraints. Twenty-six percent of the relationships ended within the time frame of the study. What the authors found was sobering.
They noted that fewer perceived or material constraints and higher felt constraints could explain break-ups. Let that sink in. It means that couples who feel social pressure or live in shared accommodation are less likely to break up. In other words, we use our partnerships to give us a sense of emotional or material stability. But then again, perhaps that’s what relationships are all about to begin with? For those who feel trapped, chances are higher that things will come to an end.
It’s a romantic concept to believe that love is at the core of a long-lasting relationship. And pondering the constraints of your past relationship may illuminate some of its shortcomings.
How we break up
Break-ups aren’t accidents. The reflections that ultimately lead to someone cutting ties do not happen from one day to the next (unless infidelity is involved). If you’re heartbroken, remind yourself that your partner likely arrived at his or her conclusion after a substantial amount of time.
An analysis of individual break-up points shows just how complex and expansive the process of separation can be. The authors of the research identified 16 steps that occur before the final break-up (see graphic). And though these events don’t always happen in this order, it may comfort you to know that you mattered. Yes, you mattered enough for your ex to take time to mull over the end of the relationship.
Often, one senses the expiration date is drawing near — like some minor irritation or a growing nervousness. But these suspicions are based on concrete warning signs. According to research by Aalto University in Finland, such signals even extend to social media. The scientists studied data from social networks (mostly Twitter) to detect break-up patterns. They found that heartbreakers-to-be sent fewer messages to their partners, but more messages to other users. Overall, the number of messages they shared online went down. Withdrawal is a classic symptom of looming separation — even on social media.
When it comes to the final act of breaking up, things tend to be more messy than civil. Fifty-eight percent of Americans said their relationship came to a dramatic end. Only a quarter of couples ended things in a civil manner. In the digital age, it may be reassuring to know that the majority of people still have the decency to break up face-to-face (57%), although younger generations are more likely to use text messaging (34%). Damn you, technology! Perhaps certain things shouldn’t be that easy.
by Anne Freier, Medium | Read more:
Image: Hearts live by being wounded. — Oscar Wilde
[ed. If your relationship is struggling (or you want to prevent that), consider picking up a copy of Hold Me Tight, by Sue Johnson.]
The Extortion Economy: How Insurance Companies Are Fueling a Rise in Ransomware Attacks
On June 24, the mayor and council of Lake City, Florida, gathered in an emergency session to decide how to resolve a ransomware attack that had locked the city’s computer files for the preceding fortnight. Following the Pledge of Allegiance, Mayor Stephen Witt led an invocation. “Our heavenly father,” Witt said, “we ask for your guidance today, that we do what’s best for our city and our community.”
Witt and the council members also sought guidance from City Manager Joseph Helfenberger. He recommended that the city allow its cyber insurer, Beazley, an underwriter at Lloyd’s of London, to pay the ransom of 42 bitcoin, then worth about $460,000. Lake City, which was covered for ransomware under its cyber-insurance policy, would only be responsible for a $10,000 deductible. In exchange for the ransom, the hacker would provide a key to unlock the files.
“If this process works, it would save the city substantially in both time and money,” Helfenberger told them.
Without asking questions or deliberating, the mayor and the council unanimously approved paying the ransom. The six-figure payment, one of several that U.S. cities have handed over to hackers in recent months to retrieve files, made national headlines.
Left unmentioned in Helfenberger’s briefing was that the city’s IT staff, together with an outside vendor, had been pursuing an alternative approach. Since the attack, they had been attempting to recover backup files that were deleted during the incident. On Beazley’s recommendation, the city chose to pay the ransom because the cost of a prolonged recovery from backups would have exceeded its $1 million coverage limit, and because it wanted to resume normal services as quickly as possible.
“Our insurance company made [the decision] for us,” city spokesman Michael Lee, a sergeant in the Lake City Police Department, said. “At the end of the day, it really boils down to a business decision on the insurance side of things: them looking at how much is it going to cost to fix it ourselves and how much is it going to cost to pay the ransom.”
The mayor, Witt, said in an interview that he was aware of the efforts to recover backup files but preferred to have the insurer pay the ransom because it was less expensive for the city. “We pay a $10,000 deductible, and we get back to business, hopefully,” he said. “Or we go, ‘No, we’re not going to do that,’ then we spend money we don’t have to just get back up and running. And so to me, it wasn’t a pleasant decision, but it was the only decision.”
Ransomware is proliferating across America, disabling computer systems of corporations, city governments, schools and police departments. This month, attackers seeking millions of dollars encrypted the files of 22 Texas municipalities. Overlooked in the ransomware spree is the role of an industry that is both fueling and benefiting from it: insurance. In recent years, cyber insurance sold by domestic and foreign companies has grown into an estimated $7 billion to $8 billion-a-year market in the U.S. alone, according to Fred Eslami, an associate director at AM Best, a credit rating agency that focuses on the insurance industry. While insurers do not release information about ransom payments, ProPublica has found that they often accommodate attackers’ demands, even when alternatives such as saved backup files may be available.
The FBI and security researchers say paying ransoms contributes to the profitability and spread of cybercrime and in some cases may ultimately be funding terrorist regimes. But for insurers, it makes financial sense, industry insiders said. It holds down claim costs by avoiding expenses such as covering lost revenue from snarled services and ongoing fees for consultants aiding in data recovery. And, by rewarding hackers, it encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.
“The onus isn’t on the insurance company to stop the criminal, that’s not their mission. Their objective is to help you get back to business. But it does beg the question, when you pay out to these criminals, what happens in the future?” said Loretta Worters, spokeswoman for the Insurance Information Institute, a nonprofit industry group based in New York. Attackers “see the deep pockets. You’ve got the insurance industry that’s going to pay out, this is great.”
by Renee Dudley, ProPublica | Read more:
Image: Jack Taylor/Getty Images
Dinosaur Jr
Hey, look over your shoulder
Hey, it's me gettin' older
Always thought I should've told you
It's alright, but it's sure gettin' colder
I know you're over my shoulder
I know now you'll get to hold her
You're gone (...)
[ed. Because it's great. See also: A 25-Year-Old Dinosaur Jr. Song Is a Hit in Japan. Nobody Knows Why. (Pitchfork)]
Lord Sundance
[ed. Repost. Sleazy weirdness from the 60s. For your pleasure. (Check out the music archives, there are over 1500 entries). ]