Published on February 27, 2019 by Siraj Raval

OpenAI has the entire AI community debating its decision not to release the fully trained version of its powerful new text-generation model, dubbed GPT-2. I’m going to explain how GPT-2 works using code, math, and animations. We’ll discuss its potential applications (both good and bad) and ways of preventing misuse, and at the end of the video I’ll give my take on whether OpenAI’s decision was justified. The transformer architecture is quickly replacing recurrent networks for sequence learning, and GPT-2 is the latest example of using it at scale. Enjoy!
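To ground that, here is a minimal NumPy sketch of masked (causal) scaled dot-product self-attention, the core operation of the transformer. This is an illustration only, not OpenAI’s implementation; every name and size below is made up:

import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings
    # project each token into query, key, and value spaces
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # every token scores every other token, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # causal mask: a token may attend only to itself and earlier tokens
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -1e9
    # weighted sum of values gives each token a context-aware representation
    return softmax(scores) @ V

# toy example: 4 tokens, embedding size 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)

The causal mask is what makes this a language model: each token can only look backwards, so the network is trained to predict the next token from everything before it.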

Code for this video:
https://github.com/openai/gpt-2
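The released code generates text one token at a time, and OpenAI’s published samples were drawn with top-k sampling, where only the k most likely tokens are kept at each step. Here is a hedged NumPy sketch of that idea (made-up logits, not the real model’s output):

import numpy as np

def top_k_sample(logits, k=40, rng=None):
    # keep only the k highest-scoring tokens, drop the rest
    if rng is None:
        rng = np.random.default_rng()
    top = np.argsort(logits)[-k:]
    # renormalize the surviving scores into a probability distribution
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    # draw the next token id from the truncated distribution
    return rng.choice(top, p=probs)

# toy vocabulary of 100 tokens with random scores
logits = np.random.default_rng(1).normal(size=100)
print(top_k_sample(logits, k=40))

For the real TensorFlow version, see sample.py and the conditional/unconditional sample scripts in the repo above.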

Please Subscribe! And Like. And comment. That’s what keeps me going.

Want more education? Connect with me here:
Twitter: https://twitter.com/sirajraval
Instagram: https://www.instagram.com/sirajraval
Facebook: https://www.facebook.com/sirajology

More learning resources:
https://medium.com/@asierarranz/i-hav…
https://blog.openai.com/better-langua…
http://jalammar.github.io/illustrated…
https://mchromiak.github.io/articles/…

Web Demo of GPT-2:
https://www.askskynet.com/

Gradient Descent:
https://www.youtube.com/watch?v=XdM6E…

Fakebox:
https://machinebox.io/docs/fakebox

Privacy tools:
https://github.com/OpenMined/PySyft/t…

Join us at the School of AI:

Join us in the Wizards Slack channel:
http://wizards.herokuapp.com/

Please support me on Patreon:
https://www.patreon.com/user?u=3191693

Signup for my newsletter for exciting updates in the field of AI:
https://goo.gl/FZzJ5w
