Tuesday, October 20, 2015
US States Move Quickly in VW Case
[ed. Caution: feeding frenzy. Having been involved in similar litigation, I can only imagine the legal maneuvering going on behind the scenes. Greed is pretty ugly. But VW is also a major component of Germany's economy, so it'll be interesting to see what political influence is brought to bear on the situation going forward. The criminal case will be bad enough; the civil case could last for decades.]
With billions of dollars at stake in restitution and penalties, U.S. states are moving quickly to try to hold Volkswagen accountable for its emissions-cheating scandal.
Forty-five states and D.C. have joined a multistate investigation led by attorneys general, which is determining how VW was able to game emissions tests to hide that its "Clean Diesel" cars emitted smog-causing exhaust up to 40 times dirtier than the law allows. California and Texas are conducting their own investigations for now. At least one county, Harris County in Texas, also is going after Volkswagen with a lawsuit seeking more than $100 million.
The attorneys general are expected to seek compensation for consumers and redress for environmental harm, building their investigations under state laws that protect consumers from deceptive trade practices and set clean air standards.
"This is a really important case and it has big economic and health consequences. It's nowhere near the scale of tobacco but you are kind of in that realm," said former Wisconsin governor and attorney general Jim Doyle, who participated in the multistate investigation that ended with a landmark $200 billion, 25-year settlement with tobacco companies in 1998. "This is the kind of case that you elect an AG for, to stand up for the safety and health of the people of the state."
Volkswagen is "looking at an enormous settlement, just enormous, when you think about how many cars are out there," he said.
The case, in some respects, presents a slam dunk: Volkswagen has already admitted wrongdoing, affecting roughly a half million cars in the United States.
"This case makes me miss my AG days because there's such an opportunity to send a message, and the states can be at the forefront of sending a message," said Sen. Richard Blumenthal, D-Conn., former attorney general of his state.
Blumenthal said he was stunned by news that the world's largest carmaker had rigged its software to dupe emissions tests. "Astonishment bordering on disbelief that a company could be so absurdly arrogant and lawless that it would knowingly engage in this type of conduct," he said.
The multistate group formed unusually quickly given the company's admissions, but the investigation could last years. For comparison, a multistate attorneys general investigation of ignition switch defects involving GM cars – a review that started shortly after GM announced a recall 20 months ago – remains active today. (...)
Volkswagen may want to deal first with any criminal charges before discussing any civil settlement, as the Justice Department investigates potential illegality by the company and its executives. The Environmental Protection Agency and Federal Trade Commission are also investigating.
"Until the criminal case clears, nobody is going to talk about civil. Volkswagen will not settle until the criminal investigations are resolved," said James E. Tierney, program director of the national state attorneys general program at Columbia Law School, and a former Maine attorney general.
by Ronnie Greene and Ryan J. Foley, AP | Read more:
Image: Luca Bruno
A Canadian Votes From New York
[ed. Congratulations to Justin Trudeau. And good luck.]
The inevitability of moving to America, if you grow up in Canada, is a benevolent ultimatum: will you or won’t you? Will you stay in Canada, your home and native land, a country with the kind of social infrastructure that (in theory) respects the life and health of its citizens, that gives communities and their individual inhabitants (in theory) the rights and support necessary to live their lives as they please? In doing so, will you resign yourself to swirl in a drain of repetitive platitudes and ineffective yet unimpeachable traditions that never stops moving but seems, somehow, to never move forward?
Or will you move to America—a default term so often compromising only New York—to take advantage of the wide spaces and vast resources (in theory), the promise of unfettered financial opportunity and limitless professional acclaim (in theory)? In doing so, will you admit to callously abandoning your neighbours, your family, the very lifeline that provided the privileges necessary to even reach out and touch such a Northern Hemisphere-specific dream, without so much as a culturally obligatory apology?
Canada is a country constantly defined by opposition. Often (almost always) this opposing contrast comes from America, a neighbour close enough to cast a country-wide shadow. Canada, as seen from America, is an eerily similar counterpart, close enough for scrutiny but not far enough for perspective: either a nearby nirvana or a malevolent microcosm. The promise of our cheerfully praised globally recognized political characteristics, such as socialized healthcare or Drake, suggests a welcome respite from what are America’s less-favourable globally recognized characteristics—the cynicism, the capitalism, the crushing pursuit of no less than complete control.
One of the truest clichés about young, career-driven Canadians living in Toronto is that the “upwards” in “upwardly mobile” refers to the ascending ninety-minute flight to New York. There is, my peers and I tell ourselves, simply more in America: there are more schools, more people, more jobs, more money. There is, our friends back home remind us, simply nothing better in America: nothing secure, nothing guaranteed, nothing given. To leave one for the other is to address the unanswerable question at the root of choosing Canada or America: why leave? The response—why stay?—is maddeningly unsatisfying for both the asker and answerer. In any case, I left Toronto for New York six months ago.
Today is a federal election and my first time voting as an ex-pat. Canadians vote for candidates in their electoral district (called a “riding”), as per the regulations of Canada’s electoral system; there are currently twenty-three registered political parties candidates can be affiliated with, but the predominant parties to watch are the Conservatives, the Liberals, and the New Democratic Party (the NDP), as well as, to a slightly lesser extent, the Green Party and the Bloc Québécois. A candidate who wins a riding represents that district as a Member of Parliament (MP), and the party with the most winning candidates becomes the ruling government, its leader the Prime Minister. The risk of splitting the vote is high, and real, particularly between the two left-leaning parties, the Liberals and the NDP. As voters, we can vote for the candidate we think would be best for our neighbourhoods, or we can vote for the candidate who belongs to the party we want to become the ruling government, or we can hope for a candidate who fulfills both those requirements. It is…confusing!
Recently my friend, Nicolae Rusan, told me he was helping to organize the #NoHarper event for Canadians living in New York: together with some of his friends from McGill University—often referred to as “Canada’s Harvard”—they were fundraising for an independent advocacy group called Leadnow currently running a campaign to ultimately defeat the sitting Conservative government and Stephen Harper—often referred to as “Canada’s Richard Nixon”—by educating people to vote strategically in the ridings with the most contentious campaigns for MPs.
His former classmate, Marie-Marguerite Sabongui, sits on the board of Leadnow. She had the idea for the event while drinking beers at Ontario Bar in Williamsburg (it is an Ontario-themed bar). They were, according to Sabongui, “disenfranchised and angry” about the recent Ontario Court of Appeal decision to uphold a rule that prevents Canadians who have lived abroad for more than five years from voting. The rule was originally put into place in 1993, but most Canadians retained their right to vote simply by visiting the country every five years – even a connecting flight through a Canadian airport counted as a visit – until 2007, when the rule began to be strictly enforced in the most literal terms: if you didn’t have an address on Canadian soil, you could not vote. A 2014 lawsuit restored the original interpretation, but this was overturned in June 2015.
As a direct result, approximately 1.4 million Canadian citizens are not eligible to vote in the current federal election. As an indirect result, the comparison between Harper and Nixon has become particularly apt. Harper’s party has been directly implicated in the push to disenfranchise as many voters as possible, alongside a multitude of other sins. In The Guardian, Nick Davies recently outlined some of the most recent and egregious offences:
In the 11 years since he became the leader of the country’s Conservatives, the party has been fined for breaking electoral rules and various members of Team Harper have been caught misleading parliament, gagging civil servants, subverting parliamentary committees, gagging scientists, harassing the Supreme Court, gagging diplomats, lying to the public, concealing evidence of potential crime, spying on opponents, bullying and smearing. (...)

Last year, in an essay published on n+1, Marianne Lenabat quoted Harper in 2006: “You won’t recognize Canada when I’m through with it.” His point, punctuated for dramatic flourish, is often cited to underscore the one-sided mirror of a relationship between Harper and his constituents. There is an “I” in Harper and a “you” in us. But this statement suggests that Harper was taking for granted that Canada already had an identity that is both easily recognizable and internally accepted, a firm identity with leftist leanings that he was reorienting towards a more austere path, a face only a fellow Canadian could love. I am not sure that that identity ever completely existed or if it is, like other forms of nostalgic reference, a past fantasy used to foster present frustrations: yes, we’re supposed to think in the simplest terms possible, clean up this place, which is so messy I don’t even recognize it anymore! (...)
In that same Guardian article, Davies mentioned one of my favorite anecdotes about Harper: while his brothers became accountants, he pursued a pre-political career as an economist, claiming he did not have the personality to become an accountant. This aligns with the Stephen Harper public persona I know best: the man I’ve seen represent my country for the last nine years is known for his dullness, his dryness, his perceived disdain for other people. It is hard to reconcile the idea of a man who felt himself antisocial enough to choose the kind of career that would allow him to work in solitude, yet wants to represent millions of Canadians to the world; the kind of man who self-identifies as most comfortable when unobserved, yet also decorates his office with self-portraits. And yet this is an equation that continues to add up in Harper’s favour, as befits an economist who knows his calculations. Harper has won three elections. Despite the best efforts of multiple parties, activists, and lobbyists to convince Canadians to vote otherwise, multiple polls leading up to the election were consistently too close to accurately predict which party would pull ahead, and with what percentage of the vote, suggesting that, at the very least, Canadians were not ready to declare their firm opposition to Harper, but had not settled on their other, singular option. This is, I think, a symptom of the confusing messages dispersed across our wide and disparate land mass: vote, but vote with caution. As a result, Harper’s version of divide-and-conquer becomes not so ominous as it is elementary: for him, Canada will be a long division equation to solve, and he will keep breaking down the numbers until there are simply no remainders left.
by Haley Mlotek, The Hairpin | Read more:
Image: uncredited
YouTube’s ‘My Daily Routine’ Is a Beautiful Lie
[ed. No idea. I knew about ASMR and haul and unboxing videos but this is a new one to me.]
A YouTuber’s morning is better than yours. While you’re still hitting the snooze button, they’ve made a healthy breakfast, put together the perfect outfit for the day, walked their dog, and tweeted a flawless selfie to hundreds of thousands of fans. I know this because I’ve seen it, in a “My Morning Routine” video.
The “My Morning Routine” video plays like this: Our heroine—routine videos are almost invariably shot by female YouTubers—wakes early to the sound of an iPhone alarm, or to a small adorable dog arriving to lick her face. She narrates the motions in a voiceover: She gets out of bed to let the dog out, then she puts on a pot of coffee and prepares a pious breakfast invariably including chia seeds. She washes her suspiciously already-perfect face and applies makeup. She smiles, scrolling through alerts on her phone, pausing to snap a selfie, then leaves for the gym, or college, or work. (The “night routine” is its inverse: The subject changes back into pajamas, chamomile tea instead of coffee in hand. We see her tucked up in bed clutching her phone, still perfectly made up but yawning, before the lights go out.)
YouTube is full of “routines”: “My Morning Routine,” “My Night Routine,” “My Routine for School,” or “Morning Routine: Fall Edition,” with “My Daily Routine” being the most generic, filmed almost exclusively by teenage girls and women in their 20s in the U.S. and the U.K. Over 500,000 results surface for the query “My Daily Routine.”
Some feature product placements or a Cribs-style fridge tour. Some contain knowing shots of their subjects in the shower, framed to cut off anything below the shoulders. Others play like soporific instructionals for life, narrated in a benign yet blathering style common to ASMR videos. We watch our heroine walk into the bathroom, then into the kitchen to make coffee and oatmeal, informing us at every step of what she’s doing in pedantic detail:
“I head over to my Keurig, and while that’s heating up I make breakfast… While that’s cooking I’m going to get my coffee ready. I’m going to go in our cupboard and pick out a mug, which we’re lacking as we just did dishes. I’m going to put in a K-Cup. I love hazelnut coffee…”
The routine video presents itself as self-expression, a way to get to know its maker and her individual quirks. Yet almost every routine is the same, telling us more about the culture they exist in than about an individual subject. They repeat a series of Stepford-esque domestic tropes, a retrograde vision of online femininity.
Yet there’s something undeniably comforting to the daily routine video: It plays like Pinterest in motion. Slights like “basic bitch” carry no weight here, because what’s basic is rendered aspirational. Simultaneously, each example brings you closer to the YouTuber who makes it: Beforehand, they’ll do “homeware hauls” and make their world beyond the initial video setting camera-ready. Then they’ll progress to routine videos, as if to declare that they live, now, on the Internet rather than IRL. Their life will become more and more “managed” even as the access granted increases. Finally, every day will be a good day.
The job of YouTubers is to perform a more down-to-earth role than mainstream celebrities (or their cousins, reality TV stars). The routine video sees their box-shaped world expanded: It raises and addresses the question of what a YouTuber does all day beyond their videos. The most honest clips feature their subject slouched for long periods in front of a computer screen, getting up at some point to reassure us that they see sunlight and go to the gym. Those moments of on-screen screens are the most interesting; there’s something eerie and obviously fake about the YouTuber who portrays herself cheerfully reading comments, rather than censoring and weeding out the inevitable abusive ones.
But “routine” videos can be so twee as to be insufferable. The subject has reached such a point of satisfaction, comfort, and security that she can commit her “routine” to video. Even if she still lives at home with her parents, here she rewrites her life as independent. The video acts as an exercise in curated perfectionism, a demonstration that its creator has her shit together. They offer a 360-degree vision of competitive normality, auditioning their star as a trainee housewife.
This vision of life is also a commercial one. Given the advent of “beauty gurus” schooled in PR and sponsored by companies, the daily routine has become a parody of real life where every moment is opportunity for product placement, paid or unpaid. Starbucks, Netflix, and iPhones feature as standard items. Other placements include face wash, toothpaste, makeup, and iPhone apps. Life is dismantled into a series of products and processes.
by Roisin Kiberd, The Kernel | Read more:
Image: J. Longo and YouTube
Monday, October 19, 2015
The Master Algorithm
Computers and the algorithms they run are precise, perfect, meticulously programmed, and austere. That’s the idea, anyway. But there’s a burgeoning, alternative model of programming and computation that sidesteps the limitations of the classic model, embracing uncertainty, variability, self-correction, and overall messiness. It’s called machine learning, and it’s impacted fields as diverse as facial recognition, movie recommendations, real-time trading, and cancer research—as well as all manner of zany experiments, like Google’s image-warping Deep Dream. Yet even within computer science, machine learning is notably opaque. In his new book The Master Algorithm, Pedro Domingos covers the growing prominence of machine learning in close but accessible detail. Domingos’ book is a nontechnical introduction to the subject, but even if it still seems daunting, it’s important to understand how machine learning works, the many forms it can take, and how it’s taking on problems that give traditional computing a great deal of trouble. Machine learning won’t bring us a utopian singularity or a dystopian Skynet, but it will inform an increasing amount of technology in the decades to come.
While machine learning originated as a subfield of artificial intelligence—the area of computer science dedicated to creating humanlike intelligence in computers—it’s expanded beyond the boundaries of A.I. into data science and expert systems. But machine learning is fundamentally different from much of what we think of as programming. When we think of a computer program (or the algorithm a program implements), we generally think of a human engineer giving a set of instructions to a computer, telling it how to handle certain inputs that will generate certain outputs. The state maintained by the program changes over time—a Web browser keeps track of which pages it’s displaying and responds to user input by (ideally) reacting in a determinate and predictable fashion—but the logic of the program is essentially described by the code written by the human. Machine learning, in many of its forms, is about building programs that themselves build programs. But these machine-generated programs—neural networks, Bayesian belief networks, evolutionary algorithms—are nothing like human-generated algorithms. Instead of being programmed, they are “trained” by their designers through an iterative process of providing positive and negative feedback on the results they give. They are difficult (sometimes impossible) to understand, tricky to debug, and harder to control. Yet it is precisely for these reasons that they offer the potential for far more “intelligent” behavior than traditional approaches to algorithms and A.I.
Domingos’ book is, to the best of my knowledge, the first general history of the machine-learning field. He covers the alternate paradigms that gave rise to machine learning in the middle of the 20th century, the “connectionism” field’s fall from grace in the 1960s, and its eventual resurrection and surpassing of traditional A.I. paradigms in the 1980s to the present. Domingos divides the field into five contemporary machine-learning paradigms—evolutionary algorithms, connectionism and neural networks, symbolism, Bayes networks, and analogical reasoning—which he imagines being unified in one future “master algorithm” capable of learning nearly anything.
None of these five paradigms admits to easy explanation. Take, for example, the neural networks that gave us Google’s Deep Dream—that thing you’ve seen that turns everyday photos into horrorscapes of eyeballs and puppy heads—as well as a surprisingly successful image-recognition algorithm that learned to recognize cat faces without any supervision. They consist of math-intensive calculations of thousands of interacting “neurons,” each with conditioned weights that determine when they “fire” outputs to other connected neurons. As these networks receive data on whether they are generating good results or not, they update their weights with the goal of improving the results. The feedback may not immediately fix a wrong answer. With enough wrong answers, though, the weights will change to skew away from that range of wrong answers. It’s this sort of indirect, probabilistic control that characterizes much of machine learning.
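[ed. A minimal sketch of the feedback loop described above: a single artificial "neuron" whose weights are nudged toward better answers, example by example. No single wrong answer is fixed directly; over many examples the weights drift away from the range of wrong answers, which is the indirect, probabilistic control the passage describes. This toy is my own illustration, not anything from Domingos' book; the OR-function data, learning rate, and iteration count are all invented.]

```python
import math
import random

def predict(weights, bias, inputs):
    # Weighted sum of inputs, squashed to a 0-to-1 "firing" strength.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
rate = 0.5

# "Train" on labeled examples of the OR function.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for _ in range(2000):
    inputs, target = random.choice(examples)
    output = predict(weights, bias, inputs)
    error = target - output            # feedback: how wrong was the guess?
    for i, x in enumerate(inputs):     # skew the weights away from the error
        weights[i] += rate * error * x
    bias += rate * error

for inputs, target in examples:
    print(inputs, round(predict(weights, bias, inputs), 2), "want", target)
```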
The question, then, is why one would want to generate opaque and unpredictable networks rather than writing strict, effective programs oneself. The answer, as Domingos told me, is that “complete control over the details of the algorithm doesn’t scale.” There are three related aspects to machine learning that mitigate this problem:
(1) It uses probabilities rather than the true/false binary.
(2) Humans accept a loss of control and precision over the details of the algorithm.
(3) The algorithm is refined and modified through a feedback process.
These three factors make for a significant change from the traditional programming paradigm (the one which I myself inhabited as a software engineer). Machine learning creates systems that are less under our direct control but that—ideally—can respond to their own mistakes and update their internal states to improve gradually over time. Again, this is a contrast to traditional programming, where bugs are things to be ferreted out before release. Machine-learning algorithms are very rarely perfect and have succeeded best in cases where there’s a high degree of fault tolerance, such as search results or movie recommendations. Even if 20 percent of the results are noise, the result is still more “intelligent” than one might expect.
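[ed. To make the contrast concrete, here is a hedged toy of my own, not an example from the book: a hand-written rule next to a "trained" word-count model whose verdicts are probabilities refined by labeled feedback. Retraining with more examples changes the counts, and thus the verdicts, without any human rewriting the rule; a few noisy answers don't break it. The spam data is invented.]

```python
from collections import Counter

def is_spam_rule(text):
    # Traditional programming: a human states the logic explicitly,
    # and the answer is a hard true/false.
    return "winner" in text.lower()

def train(examples):
    # "Learning": count how often each word shows up in spam vs. ham.
    spam, ham = Counter(), Counter()
    for text, label in examples:
        (spam if label else ham).update(text.lower().split())
    return spam, ham

def spam_probability(model, text):
    # The learned answer is a probability, not a binary verdict.
    spam, ham = model
    score = 1.0
    for word in text.lower().split():
        # Laplace-smoothed per-word odds; crude, but it sharpens as
        # more labeled feedback is folded into the counts.
        score *= (spam[word] + 1) / (ham[word] + 1)
    return score / (score + 1)

model = train([
    ("claim your prize winner", 1),
    ("free money winner", 1),
    ("lunch at noon", 0),
    ("project notes attached", 0),
])
print(spam_probability(model, "free prize"))   # high, but never exactly 1.0
print(spam_probability(model, "lunch notes"))  # low, but never exactly 0.0
```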
by David Auerbach, Slate | Read more:
Image: Lisa Larson-Walker. Images by Ultima_Gaina/Thinkstock and Akritasa/Wikimedia Commons
Sunday, October 18, 2015
A Little More Endangered
[ed. This reminds me of Hanya Yanagihara's novel The People in the Trees.]
For Christopher Filardi of the American Museum of Natural History, there is nothing like the thrill of finding a mysterious species. Such animals live at the intersection of myth and biology – tantalising researchers with the prospect that they may be real, but eluding trustworthy documentation and closer study. Indeed, last month, Filardi waxed poetic on the hunt for the invisible beasts that none the less walk among us.
“We search for them in earnest but they are seemingly beyond detection except by proxy and story,” he wrote. “They are ghosts, until they reveal themselves in a thrilling moment of clarity and then they are gone again. Maybe for another day, maybe a year, maybe a century.”
Filardi was moved because, scouring what he called “the remote highlands” of Guadalcanal in the Solomon Islands, he had found a bird he had searched more than two decades for: the moustached kingfisher.
“Described by a single female specimen in the 1920s, two more females brought to collectors by local hunters in the early 1950s, and only glimpsed in the wild once,” he wrote. “Scientists have never observed a male. Its voice and habits are poorly known. Given its history of eluding detection, realistic hopes of finding the bird were slim.”
Yet, defying the odds, Filardi did just that.
After setting mist nets across the forest, he and his team secured a male specimen with a “magnificent all-blue back” and a bright orange face. The discovery brought quite the declaration – “Oh my god, the kingfisher” – and led Filardi to liken it to “a creature of myth come to life”. And then, Filardi killed it – or, in the parlance of scientists, “collected” it.
This wasn’t trophy hunting – but outrage ensued.
“Of course, ‘collect’ means killed, a lame attempt to sanitise the totally unnecessary killing of this remarkable sentient being,” Marc Bekoff, professor emeritus of ecology and evolutionary biology at the University of Colorado, wrote in the Huffington Post. “When will the killing of other animals stop? We need to give this question serious consideration because far too much research and conservation biology is far too bloody and does not need to be.”
The controversy led Audubon – which had previously published a piece innocently titled Moustached Kingfisher Photographed for First Time – to add quite the editor’s note.
“This story has been updated to clarify that the bird was euthanised and the specimen collected,” Audubon wrote. A researcher on Filardi’s team, it added, “told Audubon that they assessed the state of the population and the state of the habitat, and concluded it was substantial and healthy enough that taking the specimen – the only male ever observed by science – would not affect the population’s success”.

Still, to some, finding something only to kill it just seemed twisted.
“These were, indeed, the first-ever photos of the male moustached kingfisher alive,” wrote blogger Chris Matyszczyk at CNET. “It didn’t live much longer.”
Filardi was also compelled to write an op-ed for Audubon: Why I Collected a Moustached Kingfisher.
“I have spent time in remote, and not so remote, forests of the Solomon Islands across nearly 20 years,” he wrote. “I have watched whole populations of birds decline and disappear in the wake of poorly managed logging operations and, more recently, mining. On this trip, the real discovery was not finding an individual Moustached Kingfisher, but discovering that the world this species inhabits is still thriving in a rich and timeless way.”
Filardi stressed that, among Guadalcanal locals, the bird is known to be “unremarkably common”. He explained how he and his team made the decision – “neither an easy decision nor one made in the spur of the moment” – to collect the bird with reference to “standard practice for field biologists”. And he said that killing one kingfisher might help save them all.
“I have come to know, through firsthand experience, how specimens and other artefacts in museums can over time become sacred,” he wrote. “I have watched sparks ignite in the eyes of Pacific Islanders holding specimens of extinct species doomed by habitat loss, invasive species or disease. I have watched my friends, my colleagues – those I work both for and with – go home and out into the world and make a difference. These moments drive my work. Through a vision shared with my Solomon Island mentors ... the Moustached Kingfisher I collected is a symbol of hope and a purveyor of possibility, not a record of loss.”
But was he right? (...)
Wildlife experts have been debating that question for more than 100 years – ever since they first noticed that the colourful and charismatic species they wanted to document had begun to vanish. The pro-collection camp says that the practice requires the death of only a few individuals and may provide knowledge that helps to ensure the survival of the overall species. The “voucher specimen” – a representative specimen used for studies – is considered the gold standard for documenting a species’ presence: it’s the most definitive way to confirm that an animal exists and serves as the basis for all kinds of research on its health and habitat.
But opponents point out that history is littered with the stuffed and mounted carcasses of animals that were the last of their kind, bagged by overzealous collectors who didn’t stop to consider the cost of the kill.
In collecting’s heyday, bagging a rare species was a point of pride for naturalists, and wealthy wildlife lovers amassed taxidermied animals the way another person might accumulate art. Famous scientists like Charles Darwin and Alfred Russel Wallace collected and preserved hundreds, thousands, even tens of thousands of specimens – most of which served a vital role in making new species known to science. But collectors, who travelled to the world’s most remote regions in search of as-yet-unknown animals, also had an Indiana Jones-like swagger.
Competition to find something first was fierce, and institutions vying for new and exotic specimens meant that dozens of researchers would go tramping up mountains and into jungles to kill the same animal.
Among the most famous victims of this is the great auk, a now-extinct North Atlantic bird with a penguin’s tuxedo-like plumage and ungainly waddle (but not much of its DNA – auks are only distantly related to their Southern Hemisphere cousins).
The species was already teetering on the brink when naturalists and museums took an interest in it in the 19th century. Climate change during the northern hemisphere’s several-century cool spell known as the “little ice age” had decimated the population. Humans then finished the job. The birds stood nearly a metre tall and sported thick plumage, making them a valuable food source and an even more valuable commercial product. And their clumsiness on land (and inability to fly) made them an easy target for hunters.
Paradoxically, it was the great auk’s sudden rarity that made scientists so eager to kill them. According to the Smithsonian, the great auk’s classification as endangered in 1775 led to increased demand for specimens – a single bird could be sold for $16 in the early 1800s, a full year’s wages. No longer hunted for its meat and down, the great auk and its eggs became a target for their scientific value. In 1844, a group of fishermen caught two of the birds on a remote island off the Icelandic coast. They were sold to a chemist in Reykjavik, who stuffed and mounted the birds, then preserved their eyes and internal organs like pickles in jars of alcohol. No one on record has seen one of the huge, black-and-white birds since.
by Sarah Kaplan and Justin Wm. Moyer, Washington Post | Read more:
Image: Rob Moyle
The Ultimate Guide to Buying a Leather Jacket
Ah, Fall is here, or what I like to call “Leather Weather”.
I first fell in love with leather jackets working with Robert Geller, where he walked me through a new leather jacket straight from the factory in Japan on my very first day.
Since then, I’ve created my own leather jacket line and amassed more leather jackets than any one guy should honestly have at one time.
A proper, staple leather jacket will not only last you forever, it’s timeless and extremely versatile, a no-brainer when it comes to building your lean wardrobe.
Outside of the suit, a leather jacket will be one of the biggest investments a guy will make in his wardrobe. Just like a suit, there’s something transformative about putting on a properly fitted leather jacket.
There’s no other way to describe it: You feel like a badass. (...)
Keep in mind, these are rules of thumb and not set in stone, simply what I’ve observed as a designer and as a shopper. It will give you a realistic idea of what to expect when you go jacket hunting.
Personally, I would be VERY cautious of jackets under $500 (truthfully, even $500 is pushing it unless we’re talking used jackets – more on that later). I’ll give you some recommendations for jackets at different price points later, but let’s get into the illustrated showdown:
Leather
The biggest factor in the price of the jacket? The quality of the leather.
Cheaper jackets will use leather that is corrected: hides from animals with a lot of scarring, branding, or nicks from how they were raised. These skins will be sanded down, and sometimes faux leather grains will be pressed into them, along with extra spraying of dyes and treatments to make them more uniform.
Because of these top coatings, corrected leathers will have an overly smooth, plastic feel, versus the soft, oily, uneven textured nature of uncorrected skins.
by Peter Nguyen, Effortlessgent | Read more:
Image: Indiana Jones

Saturday, October 17, 2015
William Basinski
[ed. One could make the argument that William Basinski is more famous for destroying his art than creating it - a form of Process Art. (Felix Gonzalez-Torres's Untitled series of candy piles is another example of process art.) Personally, Disintegration Loops I-IV seems kind of monotonous to me, but art can be that too, right?]
You are slowly being destroyed. It's imperceptible in the scheme of a day or a week or even a year, but you are aging, and your body is degrading. As your cells synthesize the very proteins that allow you to live, they also release free radicals, oxidants that literally perforate your tissue and cause you to grow progressively less able to perform as you did at your peak. By the time you reach 80, you will literally be full of holes, and though you'll never notice a single one of them, you will inevitably feel their collective effect. Aging and degradation are forces of nature, functions of living, and understanding them can be as terrifying as it is gratifying.
It's not the kind of thing you can say often, but I think William Basinski's Disintegration Loops are a step toward that understanding – the music itself is not so much composed as it is this force of nature, this inevitable decay of all things, from memory to physical matter, made manifest in music. During the summer of 2001, Basinski set about transferring a series of 20-year-old tape loops he'd had in storage to a digital file format, and was startled when this act of preservation began to devour the tapes he was saving. As they played, flakes of magnetic material were scraped away by the reader head, wiping out portions of the music and changing the character and sound of the loops as they progressed, the recording process playing an inadvertent witness to the destruction of Basinski's old music.
by Joe Tangari, Pitchfork | Read more:
Images: from the Internet Archive ("Stanley Zoobris Home Movies" and "Decomposed Carnival")
Is the World Real?
Is this real life? How do we know that we are not hallucinating it all? What if we're plugged into a Matrix-style virtual reality simulator? Isn't the universe a giant hologram anyway? Is reality really real? What is reality?
We asked renowned neuroscientists, physicists, psychologists, technology theorists and hallucinogen researchers if we can ever tell whether the "reality" we are experiencing is "real" or not. Don't worry. You're going to be ok.
Jessica L. Nielson, Ph.D., Department of Neurosurgery, Postdoctoral Scholar, University of California, San Francisco (UCSF), Brain and Spinal Injury Center (BASIC)
What is our metric for determining what is real? That is probably different for each person. One could try and find a consensus state that most people would agree is "real" or a "hallucination" but from the recent literature using imaging techniques in people who are having a hallucinatory experience on psychedelics, it seems the brain is hyper-connected and perhaps just letting in more of the perceivable spectrum of reality.
When it comes to psychosis, things like auditory hallucinations can seem very real. Ultimately, our experiences are an interpretation of a set of electrical signals in our brains. We do the best to condense all those signals into what we perceive to be the world around us (and within us), but who is to say that the auditory hallucinations that schizophrenics experience, or the amazing visual landscapes seen on psychedelics are not some kind of bleed through between different forms of reality? I don't think there is enough data to either confirm or deny whether what those people are experiencing is "real" or not.
Sean Carroll, Cosmologist and Physics professor specializing in dark energy and general relativity, research professor in the Department of Physics at the California Institute of Technology
How do we know this is real life? The short answer is: we don't. We can never prove that we're not all hallucinating, or simply living in a computer simulation. But that doesn't mean that we believe that we are.
There are two aspects to the question. The first is, "How do we know that the stuff we see around us is the real stuff of which the universe is made?" That's the worry about the holographic principle, for example -- maybe the three-dimensional space we seem to live in is actually a projection of some underlying two-dimensional reality.
The answer to that is that the world we see with our senses is certainly not the "fundamental" world, whatever that is. In quantum mechanics, for example, we describe the world using wave functions, not objects and forces and spacetime. The world we see emerges out of some underlying description that might look completely different.
The good news is: that's okay. It doesn't mean that the world we see is an "illusion," any more than the air around us becomes an illusion when we first realize that it's made of atoms and molecules. Just because there is an underlying reality doesn't disqualify the immediate reality from being "real." In that sense, it just doesn't matter whether the world is, for example, a hologram; our evident world is still just as real.
The other aspect is, "How do we know we're not being completely fooled?" In other words, forgetting about whether there is a deeper level of reality, how do we know whether the world we see represents reality at all? How do we know, for example, that our memories of the past are accurate? Maybe we are just brains living in vats, or maybe the whole universe was created last Thursday.
We can never rule out such scenarios on the basis of experimental science. They are conceivably true! But so what? Believing in them doesn't help us understand any features of our universe, and puts us in a position where we have no right to rely on anything that we did think is true. There is, in short, no actual evidence for any of these hyper-skeptical scenarios. In that case, there's not too much reason to worry about them.
The smart thing to do is to take reality as basically real, and work hard to develop the best scientific theories we can muster in order to describe it. (...)
George Musser Jr., Contributing editor for Scientific American magazine, Knight Science Journalism Fellow at MIT 2014–2015
The holographic principle doesn’t mean the universe isn't real. It just means that the universe around us, existing within spacetime, is CONSTRUCTED out of more fundamental building blocks. "Real" is sometimes taken to mean "fundamental", but that's a very limited sense of the term. Life isn't fundamental, since living things are made from particles, but that doesn’t make it any less real. It’s a higher-level phenomenon. So is spacetime, if the holographic principle is right. I talk about the holographic principle at length in my book, and I discuss the distinction between fundamental and higher-level phenomena in a recent blog post.
The closest we come in science to "real" or "objective" is intersubjective agreement. If a large number of people agree that something is real, we can assume that it is. In physics, we say that something is an objective feature of nature if all observers will agree on it - in other words, if that thing doesn’t depend on our arbitrary labels or the vagaries of a given vantage point ("frame-independent" or "gauge-invariant", in the jargon). For instance, I'm not entitled to say that my kitchen has a left side and a right side, since the labels "left" and "right" depend on my vantage point; they are words that describe me more than the kitchen. This kind of reasoning is the heart of Einstein's theory of relativity and the theories it inspired.
Could we all be fooled? Yes, of course. But there's a practical argument for taking intersubjective agreement as the basis of reality. Even if everyone is being fooled, we still need to explain our impressions. An illusion, after all, is entirely real - it is the INTERPRETATION of the illusion that can lead us astray. If I see a smooth blue patch in the desert, I might misinterpret the blue patch as an oasis, but that doesn’t mean my impression isn't real. I'm seeing something real - not an oasis, but a refracted image of the sky. So, even if we're all just projections of a computer simulation, like The Matrix, the simulation itself has a structure that gives it a kind of reality, and it is OUR reality, the one we need to be able to navigate. (The philosopher Robert Nozick had a famous argument along these lines.)
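[ed. Musser's "frame-independent" criterion is easy to see in code. The sketch below is mine, not from the article: two observers whose coordinate axes are rotated relative to each other disagree about an object's "left/right" coordinate, but agree exactly on the distance between two objects.]

import math

def rotate(point, angle):
    """Coordinates of the same point as seen from axes rotated by `angle`."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * y, -s * x + c * y)

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

stove, sink = (1.0, 2.0), (4.0, 6.0)
for angle in (0.0, math.pi / 3, math.pi / 2):  # three different vantage points
    a, b = rotate(stove, angle), rotate(sink, angle)
    # The x-coordinate (Musser's "left/right" label) changes with the observer,
    # but the distance prints 5.000 in every frame.
    print(f"stove_x={a[0]:+.3f}  distance={distance(a, b):.3f}")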
by Marina Galperina, Hopes and Fears | Read more:
Image: Erin Lux
Adobe's New Algorithm Can Erase Tourists From Your Photos In Real Time
[ed. Given the preceding post, the irony of this is not lost on me.]
From the masterminds who gave us a way to fake flawless photos of ourselves — and pretty much anything else our hearts desire — comes a new technology designed to enhance the pictures we take of landmarks, by ridding them of tourists.
Introducing "Monument Mode" by Adobe, an algorithm that can purportedly let you take a clear shot of anything, even in the most crowded locations.
"I've come here from India and I really want to take a photo of L.A.'s famous Hollywood sign, but I've been finding that very difficult to do," said Adobe engineer Ashutosh Jagdish Sharma while unveiling the app prototype in California last week.
"The problem is other tourists," he continued. "Whether it's the Tahj Mahal or the Eiffel Tower, it's always difficult to get that perfect monumental landmark shot thanks to other tourists who keep moving around and blocking the view." (...)
According to Adobe, the feature is possible thanks to a new algorithm that can distinguish moving objects (like tourists and cars) from fixed ones (like the Grand Canyon).
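[ed. Adobe hasn't published how Monument Mode actually works, but the classic trick for separating moving objects from a fixed scene is a per-pixel median over a burst of frames: any given pixel is only briefly covered by a passing tourist, so the median along the time axis recovers the background. A minimal sketch of that idea, assuming a roughly fixed camera; the function name is mine.]

import numpy as np

def remove_transients(frames):
    """frames: list of HxWx3 uint8 arrays shot from the same position.
    Returns an image keeping only what is static across the burst."""
    stack = np.stack(frames, axis=0)
    # Each pixel shows the background in most frames, so the temporal
    # median discards the tourists and cars that wander through.
    return np.median(stack, axis=0).astype(np.uint8)

# Usage: shoot ten or more frames over a few seconds, then
# clean = remove_transients(frames)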
"One click and those obstructions are gone for good," reads the caption of a "Sneak Peek" video published by the company after Sharma's presentation. (...)
Some have pointed out that Adobe has already offered the tools needed to remove people from images, namely Photoshop, for many years.
"What makes the new feature so interesting is the ability to generate the expected image in real time," explains the Daily Dot's AJ Dellinger. "It allows the photographer to take a picture of its subject as if no one else is around, making it easier to get the shot they want on the first try instead of continuously snapping and hoping one turns out."
by Lauren O'Neil, CBC News | Read more:
"One click and those obstructions are gone for good," reads the caption of a "Sneak Peek" video published by the company after Sharma's presentation. (...)
Some have taken to pointing out that Adobe has already been offering the tools, namely Photoshop, needed to remove people from images for many years.
"What makes the new feature so interesting is the ability to generate the expected image in real time," explains the Daily Dot's AJ Dellinger. "It allows the photographer to take a picture of its subject as if no one else is around, making it easier to get the shot they want on the first try instead of continuously snapping and hoping one turns out."
by Lauren O'Neil, CBC News | Read more:
Image: Milos Bicanski/Getty Images
Hangin' With Steve Berlin: The Story of Los Lobos
Speaking of doing a lot of different records and working with a lot of amazing songwriters, I own a ton of the records that you've done over the years. One, in particular, I'd like to ask you about is Paul Simon's Graceland. I obsessed over that thing when I was young. Do you have any recollections of working on it?
Oh, I have plenty of recollections of working on that one. I don't know if you heard the stories, but it was not a pleasant deal for us. I mean he [Simon] quite literally – and in no way do I exaggerate when I say – he stole the songs from us.
Really...
Yeah. And you know, going into it, I had an enormous amount of respect for the guy. The early records were amazing, I loved his solo records, and I truly thought he was one of the greatest gifts to American music that there was.
At the time, we were high on the musical food chain. Paul had just come off One Trick Pony and was kind of floundering. People forget, before Graceland, he was viewed as a colossal failure. He was low. So when we were approached to do it, I was a way bigger fan than anybody else in the band. We got approached by Lenny Waronker and Mo Ostin who ran our record company [Warner Bros.], and this is the way these guys would talk – "It would mean a lot to the family if you guys would do this for us." And we thought, "Ok well, it's for the family, so we'll do it." It sounds so unbelievably naïve and ridiculous that that would be enough of a reason to go to the studio with him.

So Paul was like, "Let's just jam," and we're like, "Oh jeez. Well alright, let's see what we can do." And it was not good because Louie wasn't comfortable. None of us were comfortable, it wasn't just Louie. It was like this very alien environment to us. Paul was a very strange guy. Paul's engineer was even stranger than Paul, and he just seemed to have no clue - no focus, no design, no real nothing. He had just done a few of the African songs that hadn't become songs yet. Those were literally jams. Or what the world came to know and I don't think really got exposed enough, is that those are actually songs by a lot of those artists that he just approved of. So that's kind of what he was doing. It was very patrician, material sort of viewpoint. Like, because I'm gonna put my stamp on it, they're now my songs. But that's literally how he approached this stuff.
I remember he played me the one he did by John Hart, and I know John Hart, the last song on the record. He goes, "Yeah, I did this in Louisiana with this zy decko guy." And he kept saying it over and over. And I remember having to tell him, "Paul, it's pronounced zydeco. It's not zy decko, it's zydeco." I mean that's how incredibly dilettante he was about this stuff. The guy was clueless.
Wow. You're kidding me?
Clue... less about what he was doing. He knew what he wanted to do, but it was not in any way like, "Here's my idea. Here's this great vision I have for this record, come with me."
About two hours into it, the guys are like, "You gotta call Lenny right now. You gotta get us out of this. We can't do this. This is a joke. This is a waste of time." And this was like two hours into the session that they wanted me to call Lenny. What am I going to tell Lenny? It was a favor to him. What am I going to say, "Paul's a fucking idiot?"
Somehow or other, we got through the day with nothing. I mean, literally, nothing. We would do stuff like try an idea out and run it around for 45 minutes, and Paul would go "Eh... I don't like it. Let's do something else." And it was so frustrating. Even when we'd catch a glimpse of something that might turn into something, he would just lose interest. A kitten-and-the-string kinda thing.
So that's day one. We leave there and it's like, "Ok, we're done. We're never coming back." I called Lenny and said, "It really wasn't very good. We really didn't get anything you could call a song or even close to a song. I don't think Paul likes us very much. And frankly, I don't think we like him very much. Can we just say, 'Thanks for the memories' and split?" And he was like, "Man, you gotta hang in there. Paul really does respect you. It's just the way he is. I'll talk to him." And we were like, "Oh man, please Lenny. It's not working." Meanwhile, we're not getting paid for this. There was no discussion like we're gonna cash in or anything like that. It was very labor-of-love.
Really...?
Yeah. Don't ask me why. God knows it would have made it a lot easier to be there.
And Lenny put you guys together thinking it would be a good match?
Well, "It would be good for the family." That was it. So we go back in the second day wondering why we're there. It was ridiculous. I think David starts playing "The Myth of the Fingerprints," or whatever he ended up calling it. That was one of our songs. That year, that was a song we started working on By Light of the Moon. So that was like an existing Lobos sketch of an idea that we had already started doing. I don't think there were any recordings of it, but we had messed around with it. We knew we were gonna do it. It was gonna turn into a song. Paul goes, "Hey, what's that?" We start playing what we have of it, and it is exactly what you hear on the record. So we're like, "Oh, ok. We'll share this song."
Good way to get out of the studio, though...
Yeah. But it was very clear to us, at the moment, we're thinking he's doing one of our songs. It would be like if he did "Will the Wolf Survive?" Literally. A few months later, the record comes out and says "Words and Music by Paul Simon." We were like, "What the fuck is this?"
We tried calling him, and we can't find him. Weeks go by and our managers can't find him. We finally track him down and ask him about our song, and he goes, "Sue me. See what happens."
What?! Come on...
That's what he said. He said, "You don't like it? Sue me. You'll see what happens." We were floored. We had no idea. The record comes out, and he's a big hit. Retroactively, he had to give songwriting credit to all the African guys he stole from that were working on it and everyone seemed to forget. But that's the kind of person he is. He's the world's biggest prick, basically.
So we go back to Lenny and say, "Hey listen, you stuck us in the studio with this fucking idiot for two days. We tried to get out of it, you made us stay in there, and then he steals our song?! What the hell?!" And Lenny's always a politician. He made us forget about it long enough that it went away. But to this day, I do not believe we have gotten paid for it. We certainly didn't get songwriting credit for it. And it remains an enormous bone that sticks in our craw. Had he even given us a millionth of what the song and the record became, I think we would have been – if nothing else - much richer, but much happier about the whole thing.
Have you guys seen him since then?
No. Never run into him. I'll tell you, if the guys ever did run into him, I wouldn't want to be him, that's for sure.
That's an amazing story. I can't believe I never heard it before.
We had every right and reason to sue him, and Lenny goes, "It's bad for the family." When we told the story in that era, when this was going down, we were doing interviews and telling the truth. And Lenny goes, "Hey guys, I really need you to stop talking about it. It's bad for the family."
Amazing. Talk about bad for the family.
I know. Again, it's just so incredible how naïve we were back then. You can't even imagine that era of music when you'd actually listen to your record company president who told you to shut up because "it's bad for the family." Now, I'd tell him to go fuck himself.
That's our version of it. I'd love to hear Paul's version of it.
But he's much richer now and could probably give a fuck about it. It's still one of those things where I've not forgiven anyone involved in it. It still remains. I haven't let it go, as you can tell. It was just so wrong and so rude, and so unnecessary. It is an amazing moment in our history.
by Scott Caffrey, JamBase | Read more:
Image: Los Lobos
The Future of Design: Interview With Neri Oxman
[ed. Basically function over form, but with more inter-disciplinary integration. A convergence of architecture, biology, engineering, neuroscience, 3-D printing, algorithmic modeling, and newly developed synthetic materials.]
A former medical student at Hebrew University and the Technion-Israel Institute of Technology, Oxman made a final stop at the renowned Architectural Association in London before joining MIT as a presidential research fellow and PhD candidate in Design Computation in 2006. Since arriving, she has undertaken a startlingly large amount of design research, driven by her belief that design should respond to the local environment rather than be driven by form. By using software to create new composite materials, Oxman has been able to replicate the processes of nature, creating materials that are able to adapt to light, load, skin pressure, curvature and other ecological elements. Oxman spoke to MATTER about her vision for the future of design and material construction and the projects she's developing that might just help us get there a little bit quicker.
Andrew Dent: Why do you feel that your area of expertise and investigation has garnered so much interest from such a wide audience?
Neri Oxman: Thank you, this is humbling. Public interest is motivated by zeitgeist, but it also creates it. The ideas that I have promoted – often through small physical case studies – are evocative of an idealistic ambience in which emerging science and technology becomes a hopeful and humanistic medium for broad cultural transformation. In this context, I think my work is communicative on several levels.
I try not to take on new work unless it potentially contributes to a general understanding of the way in which to create it. That, for me, is where all the fun is. So the work touches upon issues in design process that are applicable not only to architectural and design practice, but also to emerging areas in material engineering and digital fabrication. When exploring an integrated design approach that seeks to overlap with, and operate across, multiple fields, design becomes innovative, richer, and more capable of broad impact. Design, ultimately, is about an ability to work through constraints. In the case of MATERIALECOLOGY these constraints are geared towards recreating the tools and technologies that are inherently related to the type of product at hand. In this way, the very instrumentality of design becomes a frontier of innovation.
For example, with Beast – a prototype for a chaise lounge – the aim was to completely rethink the Modernist project and consider physical behavior, not form, as the first article of production. Beast relates material properties to a general loading profile that would be exerted on the chaise when in use. Stiff and soft polymers are distributed in areas of high and low pressure respectively, and the height of each cushioning bump, as it appears on the surface area of the chaise, corresponds to our body’s pressure map, providing for comfort and support. The design process in this case was completely tailored to a new way of thinking about design and full scale digital fabrication, an industry still in its infancy. Imagine Mary Shelley’s mythical creatures; like them, Beast is an organic-like entity created synthetically by the incorporation of physical parameters into digital generation protocols. It is a Performative Chaise. It exploits and advances technological frontiers to create a form of responsive architecture. Here form follows force not unlike the way Mother Nature has it.
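[ed. To make the "behavior first" idea concrete: a toy version of Beast's logic might read a body-pressure map and assign a polymer stiffness and bump height per cell. The linear mapping and the modulus values below are my assumptions for illustration, not Oxman's actual rule.]

import numpy as np

SOFT_MPA, STIFF_MPA = 0.6, 60.0   # assumed moduli for the soft and stiff polymers
MAX_BUMP_MM = 12.0                # assumed height of the tallest cushioning bump

def material_fields(pressure_map):
    """pressure_map: 2-D array of body pressure in arbitrary units.
    Returns per-cell polymer stiffness (stiff where load is high)
    and cushioning-bump height (scaled to the pressure map)."""
    p = (pressure_map - pressure_map.min()) / np.ptp(pressure_map)
    stiffness = SOFT_MPA + p * (STIFF_MPA - SOFT_MPA)
    bump_height_mm = MAX_BUMP_MM * p
    return stiffness, bump_height_mm

# e.g. with a fake 4x4 pressure map:
stiff, bumps = material_fields(np.random.rand(4, 4))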
Secondly, I believe the work advocates a new approach to the culture of green; let me explain. So-called sustainable design standards relate to architectural functional components that are somewhat old-fashioned in their construction methods: think bricks, or the hegemony of metal. In the future, composites are going to occupy a much broader portion of the building industry and concrete will be a thing of the past. Currently, there exists a separation between materials used for structural engineering and materials used for environmental comfort. In my work I attempt to invent ways in which to integrate the two.
Monocoque is a good example in which material properties are modified according to specific structural and environmental constraints. French for "single shell," Monocoque stands for a construction technique that supports structural load using the object's external skin. In contrast to the traditional design of building skins, which distinguishes between internal structural frameworks and non-bearing skin elements, this approach promotes heterogeneity and variation of material properties. The project demonstrates the notion of a structural skin using a Voronoi pattern, the density of which corresponds to multi-scalar loading conditions. The distribution of shear-stress lines and surface pressure is embodied in the allocation and relative thickness of the vein-like elements built into the skin. The model was 3-D printed using PolyJet Matrix technology, which allows for the assignment of structural properties to multiple 3-D printed materials. This technology provides for an ability to print parts and assemblies made of multiple materials within a single build, as well as to create composite materials that present preset combinations of mechanical properties. Now imagine printing muscle that way.
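[ed. One simple way to get a Voronoi pattern whose density tracks a loading condition is rejection-sampling the seed points against a stress field, so cells are smaller (and the vein network denser) where the load is highest. The stress field and numbers below are invented for illustration; this is my sketch, not the project's code.]

import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)

def stress(x, y):
    """Stand-in scalar stress field on a unit panel, peaking at the bottom edge."""
    return np.exp(-3.0 * y)

def sample_seeds(n):
    seeds = []
    while len(seeds) < n:
        x, y = rng.random(), rng.random()
        # Accept a candidate with probability proportional to local stress,
        # so high-stress regions collect more Voronoi seeds.
        if rng.random() < stress(x, y):
            seeds.append((x, y))
    return np.array(seeds)

vor = Voronoi(sample_seeds(400))
# vor.vertices and vor.ridge_vertices now describe a cell pattern whose
# density follows the stress field; the relative thickness of the skin's
# vein-like elements could be scaled by the same field.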
Another significant aspect of the work lies in its capacity to translate physical phenomena into art, or to express form-generating formulae as building prototypes. My contribution to Paola Antonelli's Design and the Elastic Mind exhibition at MoMA provided such an opportunity. A series of four projects entitled Natural Artifice examined the relation between physical material properties and performance criteria such as structural load, heat transfer and insulation. All models were, in essence, expressions of forms front-loaded with data emulating their behavior prior to fabrication.
Raycounting, for instance, examines the relation between light and geometry. A computational algorithm determines the curvature of the artifact for shading purposes, depending on the location of one or multiple light sources relative to the desired location of shading.
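[ed. The interview doesn't spell out Raycounting's algorithm; a minimal reading of "curvature determined by the light source" is per-patch vector math, orienting each patch of the surface to squarely block the ray running from the light to the spot being shaded. A hypothetical sketch under that assumption.]

import numpy as np

def patch_normal(light_pos, shade_target):
    """Unit normal a flat patch needs to face the light-to-target ray head-on."""
    ray = np.asarray(shade_target, float) - np.asarray(light_pos, float)
    return -ray / np.linalg.norm(ray)

def blended_normal(light_positions, shade_target):
    """With multiple lights, one assumed option: average the per-light normals."""
    normals = [patch_normal(l, shade_target) for l in light_positions]
    mean = np.mean(normals, axis=0)
    return mean / np.linalg.norm(mean)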
Finally, I hope the work opens a new scale between architecture and material science. Designers should not always accept off-the-shelf materials, but should realize that they have the power to design and manipulate material behavior. This shift points towards a new way to classify materials and a newly dynamic notion of the materials library.
by Andrew H. Dent, Material ConneXion | Read more:
Image: The Beast, Neri Oxman
Labels: Architecture, Critical Thought, Environment, Science, Technology