OpenAI has the entire AI community debating its decision not to release the fully trained version of its powerful new text-generation model, dubbed GPT-2. I'm going to explain how GPT-2 works using code, math, and animations. We'll discuss its potential applications (both good and bad), ways of preventing misuse, and at the end of the video I'll give my take on whether OpenAI was justified in withholding the model. The transformer architecture is quickly replacing recurrent networks for sequence learning, and OpenAI's GPT-2 is the latest example of using it at scale. Enjoy!
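To give a feel for what "the transformer at scale" means, here is a minimal NumPy sketch of the causal (masked) self-attention operation at the heart of every GPT-2 block. The weight matrices below are random stand-ins for illustration, not GPT-2's trained parameters, and the function omits multi-head splitting, layer norm, and residual connections.

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention with a causal mask.
    x: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # similarity of every position with every other, scaled by sqrt(d_head)
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # causal mask: position i may only attend to positions <= i,
    # so the model can't peek at future tokens while generating text
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -1e9
    # softmax over each row to get attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # weighted mix of value vectors

# toy demo with random (untrained) weights
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = causal_self_attention(x, Wq, Wk, Wv)   # shape (4, 8)
```

Because of the mask, the first token's output depends only on itself, which is exactly what lets GPT-2 generate text one token at a time.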
Code for this video:
Please Subscribe! And Like. And comment. That's what keeps me going.
More learning resources:
Web Demo of GPT-2:
Join us at the School of AI:
Join us in the Wizards Slack channel:
Please support me on Patreon:
Signup for my newsletter for exciting updates in the field of AI: