Don’t Make Artificial Intelligence Artificially Stupid in the Name of Transparency

Artificial intelligence systems are going to crash some of our cars, and sometimes they’re going to recommend longer sentences for black Americans than for whites. We know this because they’ve already gone wrong in these ways. But this doesn’t mean that we should insist, as many do, including the European Union’s General Data Protection Regulation, that artificial intelligence should be able to explain how it came up with its conclusions in every non-trivial case.

Demanding explicability sounds fine, but achieving it may require making artificial intelligence artificially stupid. And given the promise of the type of AI called machine learning, a dumbing-down of this technology could mean failing to diagnose diseases, overlooking significant causes of climate change, or making our educational system excessively one-size-fits-all. Fully tapping the power of machine learning may well mean relying on results that are literally impossible to explain to the human mind.

Machine learning, especially the sort called deep learning, can analyze data into thousands of variables, arrange them into immensely complex and sensitive arrays of weighted relationships, and then run those arrays repeatedly through computer-based neural networks. To understand the outcome (why, say, the system thinks there’s a 73 percent chance you’ll develop diabetes, or an 84 percent chance that a chess move will eventually lead to victory) could require comprehending the relationships among those thousands of variables computed by multiple runs through vast neural networks. Our brains simply can’t hold that much information.
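
To make that scale concrete, here is a minimal, hypothetical Python sketch: even a toy network far smaller than any real diagnostic model funnels its one-number answer through thousands of weights, none of which means anything on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "deep" network: 100 input variables, two hidden layers, one output.
layer_sizes = [100, 64, 64, 1]
weights = [rng.normal(scale=0.3, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def predict(x):
    """Forward pass: repeated weighted sums passed through nonlinearities."""
    for w in weights[:-1]:
        x = np.tanh(x @ w)
    return 1.0 / (1.0 + np.exp(-(x @ weights[-1])))  # a probability, e.g. 0.73

patient = rng.normal(size=100)  # one "patient" described by 100 variables
print(f"predicted risk: {predict(patient)[0]:.2f}")
print(f"weights behind that single number: {sum(w.size for w in weights):,}")
```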

There’s lots of exciting work being done to make machine learning results understandable to humans. For example, sometimes an inspection can disclose which variables had the most weight. Sometimes visualizations of the steps in the process can show how the system came up with its conclusions. But not always. So we can either stop insisting on explanations in every case, or we can resign ourselves to sometimes getting less accurate results from these machines. That might not matter when machine learning is generating a list of movie recommendations, but it can be a matter of life and death in medical and automotive applications, among others.
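
As a hypothetical illustration of that kind of inspection, the sketch below applies a model-agnostic technique called permutation importance to a small stand-in model trained on synthetic data; the dataset, model, and library choices are assumptions for the example, and even this ranking says nothing about how the variables interact inside the network.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

# Synthetic "diagnosis" data: 1,000 cases, 20 variables, only 5 of them informative.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                      random_state=0).fit(X, y)

# Permutation importance: shuffle one variable at a time and measure how much
# the model's accuracy drops. It ranks the variables the model leans on most,
# but it does not explain how they interact inside the network.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(-result.importances_mean)[:5]:
    print(f"variable {i}: importance {result.importances_mean[i]:.3f}")
```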

Explanations are tools: We use them to accomplish some goal. With machine learning, explanations can help developers debug a system that’s gone wrong. But explanations can also be used to judge whether an outcome was based on factors that should not count (gender, race, etc., depending on the context) and to assess liability. There are, however, other ways we can achieve the desired result without inhibiting the ability of machine learning systems to help us.

Here’s one promising tool that’s already quite familiar: optimization. For example, during the oil crisis of the 1970s, the federal government decided to optimize highways for better gas mileage by dropping the speed limit to 55 mph. Similarly, the government could decide to regulate what autonomous cars are optimized for.

Say elected officials determine that autonomous vehicles’ systems should be optimized for lowering the number of US traffic fatalities, which in 2016 totaled more than 37,000. If the number of fatalities drops dramatically (McKinsey says self-driving cars could reduce traffic deaths by 90 percent), then the system will have reached its optimization goal, and the nation will rejoice even if no one can understand why any particular vehicle made the “decisions” it made. Indeed, the behavior of self-driving cars is likely to become quite inexplicable as they become networked and determine their behavior collaboratively.
