AI Can Recognize Images, But Text Has Been Tricky—Until Now

In 2012, artificial intelligence researchers revealed a big improvement in computers’ ability to recognize images by feeding a neural network millions of labeled images from a database called ImageNet. It ushered in an exciting phase for computer vision, as it became clear that a model trained using ImageNet could help tackle all sorts of image-recognition problems. Six years later, that’s helped pave the way for self-driving cars to navigate city streets and Facebook to automatically tag people in your photos.

In other arenas of AI research, like understanding language, similar models have proved elusive. But recent research from OpenAI and the Allen Institute for AI suggests a potential breakthrough, with more robust language models that can help researchers tackle a range of unsolved problems. Sebastian Ruder, a researcher behind one of the new models, calls it his field’s “ImageNet moment.”

The improvements can be dramatic. The most widely tested model so far is Embeddings from Language Models, or ELMo. When it was released by the Allen Institute this spring, ELMo swiftly toppled previous bests on a variety of challenging tasks—like reading comprehension, where an AI answers SAT-style questions about a passage, and sentiment analysis. In a field where progress tends to be incremental, adding ELMo improved results by as much as 25 percent. In June, it was awarded best paper at a major conference.


Dan Klein, a professor of computer science at UC Berkeley, was among the early adopters. He and a student were at work on a constituency parser, a bread-and-butter tool that involves mapping the grammatical structure of a sentence. By adding ELMo, Klein suddenly had the best system in the world, the most accurate by a surprisingly wide margin. “If you’d asked me a few years ago if it was possible to hit a level that high, I wouldn’t have been sure,” he says.

Models like ELMo address a core issue for AI-wielding linguists: lack of labeled data. In order to train a neural network to make decisions, many language problems require data that’s been meticulously labeled by hand. But producing that data takes time and money, and even a lot of it can’t capture the unpredictable ways that we speak and write. For languages other than English, researchers often don’t have enough labeled data to accomplish even basic tasks.

“We’re never going to be able to get enough labeled data,” says Matthew Peters, a research scientist at the Allen Institute who led the ELMo team. “We really need to develop models that take messy, unlabeled data and learn as much from it as possible.”

Luckily, thanks to the internet, researchers have plenty of messy data from sources like Wikipedia, books, and social media. The strategy is to feed those words to a neural network and allow it to discern patterns on its own, a so-called “unsupervised” approach. The hope is that those patterns will capture some general aspects of language—a sense of what words are, perhaps, or the basic contours of grammar. As with a model trained using ImageNet, such a language model could then be fine-tuned to master more specific tasks—like summarizing a scientific article, classifying an email as spam, or even generating a satisfying end to a short story.
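The two-phase idea can be illustrated with a toy sketch (not ELMo itself, which uses deep neural networks): a "pretraining" phase learns word representations from raw, unlabeled sentences by counting each word's neighbors, and a downstream phase with only a tiny labeled set leans on those representations instead of learning from scratch. All sentences and words below are made up for illustration.

```python
from collections import Counter, defaultdict

# Unlabeled text: no human annotation, just raw sentences.
unlabeled = [
    "great acting and great music",
    "superb acting and superb music",
    "terrible plot with terrible pacing",
    "awful plot with awful pacing",
]

# Phase 1 ("pretraining"): represent each word by the counts of
# its immediate neighbors, learned from unlabeled text alone.
vectors = defaultdict(Counter)
for sentence in unlabeled:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in (i - 1, i + 1):
            if 0 <= j < len(words):
                vectors[w][words[j]] += 1

def similarity(a, b):
    """Overlap between two words' neighbor counts."""
    return sum((vectors[a] & vectors[b]).values())

# Phase 2 ("fine-tuning"): a downstream task with only two
# labeled examples, which borrows the pretrained representations.
labeled = {"great": "positive", "terrible": "negative"}

def classify(word):
    return labeled[max(labeled, key=lambda seed: similarity(word, seed))]

print(classify("superb"))  # → positive
print(classify("awful"))   # → negative
```

Because "superb" appears in the same contexts as "great", the pretrained counts pull it toward the positive seed even though "superb" was never labeled—a crude version of the transfer that models like ELMo perform with learned vectors rather than raw counts.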
