This year is likely to be remembered for the Covid-19 pandemic and for a significant presidential election, but there is a new contender for the most spectacularly newsworthy happening of 2020: the unveiling of GPT-3. As a very rough description, think of GPT-3 as giving computers a facility with words that they have had with numbers for a long time, and with images since about 2012.
The core of GPT-3, which is a creation of OpenAI, an artificial intelligence company based in San Francisco, is a general language model designed to perform autofill. It is trained on uncategorized internet writings, and basically guesses what text ought to come next from any starting point. That may sound unglamorous, but a language model built for guessing with 175 billion parameters — 10 times more than previous competitors — is surprisingly powerful.
The eventual uses of GPT-3 are hard to predict, but it is easy to see the potential. GPT-3 can converse at a conceptual level, translate language, answer email, perform (some) programming tasks, help with medical diagnoses and, perhaps someday, serve as a therapist. It can write poetry, dialogue and stories with a surprising degree of sophistication, and it is generally good at common sense — a typical failing for many automated response systems. You can even ask it questions about God.
Imagine a Siri-like voice-activated assistant that actually did your intended bidding. It also has the potential to outperform Google for many search queries, which could give rise to a highly profitable company.
GPT-3 does not try to pass the Turing test by being indistinguishable from a human in its responses. Rather, it is built for generality and depth, even though that means it will serve up bad answers to many queries, at least in its current state. As a general philosophical principle, it accepts that being weird sometimes is a necessary part of being smart. In any case, like so many other technologies, GPT-3 has the potential to rapidly improve.
It is not difficult to imagine a wide variety of GPT-3 spinoffs, or companies built around auxiliary services, or industry task forces to improve the less accurate aspects of GPT-3. Unlike some innovations, it could conceivably generate an entire ecosystem. (...)
Like all innovations, GPT-3 involves some dangers. For instance, if prompted by descriptive ethnic or racial words, it can come up with unappetizing responses. One can also imagine that a more advanced version of GPT-3 would be a powerful surveillance engine for written text and transcribed conversations. Furthermore, it is not an obvious plus if you can train your software to impersonate you over email. Imagine a world where you never know who you are really talking to — “Is this a verified email conversation?” Still, the hope is that protective mechanisms can at least limit some of these problems.
We have not quite entered the era where “Skynet goes live,” to cite the famous movie phrase about an AI taking over (and destroying) the world. But artificial intelligence does seem to have taken a major leap forward. In an otherwise grim year, this is a welcome and hopeful development. Oh, and if you would like to read more, here is an article about GPT-3 written by … GPT-3.
by Tyler Cowen, Bloomberg | Read more:
Image: Wall-E
[ed. From my casual surfing adventures, it does seem like GPT-3 has exploded recently. For example: OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless (MIT Technology Review).]