YouTube’s algorithm is back in the news for all the wrong reasons

February 5, 2018 Sam DeBrule


Awesome, not awesome.

#Awesome
“…Alonso Martinez [technical director at Pixar Animation Studios] pointed out that we have an extremely accurate technological approach to pain quantification in the form of MRI scanning, but that this is expensive and highly invasive. We can also imagine it being unnecessary in an era where machines can parse vast troves of data to build predictive models of once unquantifiable things, such as suffering reflected in the microexpressions of a face…” — Michael Byrne, Editor Learn More on Motherboard >

#Not Awesome
“The current language in the killer robot debate suggests that those weapons are capable of acting without meaningful human control, and that their creation and use is somehow distinct from other sorts of collective actions. It also suggests that potential harm arising from that creation and use may be morally unattributable to those who create and use them. This is not the sort of moral detachment we should foster in our technology and military communities, especially in relation to what is perhaps the gravest and most consequential of all human activities: war.” — Michael Robillard, Iraq War veteran and postdoctoral research fellow Learn More on The New York Times >

What we’re reading.

1/ YouTube’s recommendation algorithm “does not appear to be [optimizing] for what is truthful, or balanced, or healthy for democracy,” and the impact it has on our political system could be hugely underestimated. Learn Why on The Guardian >

2/ For all the progress that has been made in the field of AI lately, it still feels like we’re far from some major breakthroughs, like ones that will save humans from doing work we can’t stand and help us do a better job of the things we’re not so good at. Learn Why on WIRED >

3/ There’s a counter-intuitive narrative emerging that autonomous trucks will create more trucking jobs than they eliminate. It’s unclear, however, whether this is just wishful thinking from a company with a lot riding on it coming true. Learn Why on The Atlantic >

4/ Many regulators would argue that in order to prevent injustices caused by AI systems, we must make sure that algorithms can “explain themselves.” The problem is, keeping algorithms simple enough to be explicable means slowing down massive potential progress. Learn Why on Medium >

5/ Amazon shook up the business world when it announced its partnership with Berkshire Hathaway and JP Morgan to tackle healthcare, but it turns out China’s giant companies may have a major AI advantage in the space. Learn Why on The New York Times >

6/ Much has been made of the machine learning algorithms that power Google Translate replacing the need for human translators, but that day is probably a long way off. Learn Why on The Atlantic >

7/ For every fleet of autonomous vehicles on the road, you should expect a remote human operator in some faraway office trying to make sure that everything moves smoothly. Learn Why on WIRED >

What we’re building.

We think the future workplace will be led by people who excel at finding information and acting on it quickly.

But today, our work is fractured. Millions of unnecessary steps stand between us and the information we need to do our jobs.

That’s why we’re building Journal, a companion app that connects your work apps. It helps you get to all your information fast so you can use your time and energy on the things that matter.

Join our waitlist, and we’ll give you early access.

Links from the community.

“Meet the Company Trying to Democratize Clinical Trials with AI” submitted by Avi Eisenberger (@aeisenberger). Learn More on WIRED >

“Andrew Ng officially launches his $175M AI Fund” submitted by Samiur Rahman (@samiur1204). Learn More on TechCrunch >

“Neva is now Astound. Here’s why.” submitted by Dan Turchin (@dturchin). Learn More on Astound >

YouTube’s algorithm is back in the news for all the wrong reasons was originally published in Machine Learnings on Medium, where people are continuing the conversation by highlighting and responding to this story.
