Awesome, not awesome.
“Let’s say we offer a $500 monthly plan in which you can tap a button and get access to transportation whenever you want it, and you get to choose your room-on-wheels experience. Maybe you want a cup of coffee on your way to work, or you want to watch the Warriors game later, so you’re in what’s basically a sports bar, with a bartender. If 0.5 percent of all miles driven are done on a ride-sharing app, and then if that number increases to, say, 80 percent, it’ll be such a huge industry shift that even if only 2 percent of that 80 percent is done by human drivers, it still represents a drastic increase in the number of human ride-sharing drivers.” — John Zimmer, Co-Founder and President of Lyft Learn More on The New York Times >
“She finds the notion of children empathizing with robots troublesome and quite possibly dangerous. Kids need connections to real people in order to mature emotionally. ‘Pretend empathy does not do the job,’ she told me. If relationships with smart toys crowd out those with friends or family, even partially, we might see ‘children growing up without the equipment for empathic connection. You can’t learn it from a machine.’” — Alexis C. Madrigal Learn More on The Atlantic >
What we’re reading.
1/ When 20% of cars on the road are driven by algorithms, a single catastrophic human vs. machine collision could slow the adoption of autonomous vehicles for many years — but the upsides probably outweigh the downsides. Learn More on The New York Times >
2/ 100% car autonomy presents its own set of problems — but the most apparent? An utter lack of imagination. Social constructs, like cities, are so hardened in our minds that it’s difficult to picture a world that will be quite different from the one we all know. Learn More on The New York Times >
3/ All algorithms are susceptible to manipulation by bad actors. Google, Facebook, and now YouTube are under fire for what they’ve let slip through their filters — but more stringent filters could make everything so much worse. Learn More on Polygon >
4/ In a world overloaded by information, content creators care above all else that you discover their information. How does it make you feel to know the content you consume exists for the sole reason that you discover it — and nothing deeper? Learn More on BLDGBLOG >
5/ The next time you swipe right in your dating apps, you shouldn’t be so sure that you’re not connecting with an AI bot. Learn More on Motherboard >
6/ The “truck drivers” of tomorrow will operate the vehicles carrying their payloads from hundreds of miles away, on their phones or in front of their computers. Learn More on The Atlantic >
7/ Machine learning is actually to blame for the annoying “I” autocorrect bug in iOS. Learn More on Twitter >
What we’re building.
At work, our inboxes fill up quicker than we can empty them, key decisions are posted and immediately lost in Slack, and we forget the thousands of useful articles we’ve read that could help us do our jobs better. Information overload is wreaking havoc on our ability to process information, make decisions, and be productive.
We’re building Journal to help you remember and find all the important conversations, ideas, and knowledge you need to work faster.
Join our waitlist, and you’ll be one of the first people to get free access to our Chrome extension. You’ll never forget important information or lose time recreating work again.
Where we’re going.
Highlight from “AI Ethics and the Race to Bring Pen and Paper Industries Online — A Conversation with Leo Polovets of Susa Ventures”
Sam: …There are obvious benefits to bringing these processes and datasets online — but I doubt this will always be for the good.
For example, earlier this year the Presidential Advisory Commission on Election Integrity sent out a request to states for voter roll data (name, address, date of birth, political party, voter history, SSN) in order to “fully analyze vulnerabilities and issues related to voter registration and voting.”
One doesn’t have to think too hard to imagine how this data could be abused to suppress voting from specific populations.
Now that machine learning allows us to process and draw connections between ever larger data sets, how should developers decide which ones are ethical to work on?
Leo: To be honest, I don’t have a good answer for this. Everyone has their own code of ethics, and individuals will have to make the personal call as to whether or not they’re okay with the potential impact of the algorithms they build.
Technology is like science in many ways. It has the potential for good. It has the potential for evil.
You can use Google to diagnose if you’re having a heart attack, or to find instructions on bomb making.
It’s up to people building products to think critically about the societal and ethical implications of their work. It’s the unfortunate truth that many things built with the best intentions can be abused for evil. It’s a tough question.
Links from the community.
“Job Security: What happens when AI takes over web design?” by Josh Aarons (@joshaarons). Learn More on Noteworthy >
“How Clarifai Builds Accurate and Unbiased AI Technology” submitted by Avi Eisenberger (@aeisenberger). Learn More on Clarifai >
“Business questions engineers should ask when interviewing at ML/AI companies” submitted by Samiur Rahman (@samiur1204). Learn More on Medium >
“Importance Of Bloomberg’s Article On Apple’s AI Headset Project” submitted by Carl DeBrule (@carldebrule). Learn More on Seeking Alpha >
Machine Learnings — Conversation with Leo Polovets of Susa Ventures was originally published in Machine Learnings on Medium, where people are continuing the conversation by highlighting and responding to this story.