What is Generative Pre-trained Transformer 3 aka GPT-3?
GPT-3 sounds less exciting and scary than Deep Blue, Watson, or Palantir. Yet the technical-sounding combination of letters and numbers is meant to be nothing less than a big step toward “artificial general intelligence”: a system that does not merely perform a few specialized tasks but “thinks” like a human.
Programming code on demand:
In one application example, GPT-3 becomes a web designer. Instead of writing the code themselves, users only have to describe what should appear on the website.
In a demo, the developer Sharif Shameem instructs the program to create a “button that looks like a watermelon”; the corresponding HTML and CSS code appears on screen a few seconds later. A “table with the richest countries in the world with names and GDP as columns” is not only generated as code by GPT-3 but also filled in with data.
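To give a feel for how such a request reaches the model, here is a minimal sketch of a call to OpenAI's interface. It is an illustration only, not Shameem's actual setup; the prompt wording, engine name, and parameters follow the early beta of OpenAI's Python client and are assumptions here.

```python
# Minimal sketch: asking GPT-3 for front-end code through OpenAI's API.
# Prompt wording, engine name, and parameters are illustrative assumptions,
# based on the early beta of the "openai" Python client.
import openai

openai.api_key = "YOUR_API_KEY"  # access is granted by OpenAI

prompt = (
    "Generate HTML and CSS for the following description.\n"
    "Description: a button that looks like a watermelon\n"
    "Code:"
)

response = openai.Completion.create(
    engine="davinci",        # the largest GPT-3 model in the beta
    prompt=prompt,
    max_tokens=250,          # room for a small HTML/CSS snippet
    temperature=0.2,         # low randomness, since we want working code
    stop=["Description:"],   # stop before the model invents a new task
)

print(response.choices[0].text)
```

GPT-3 simply continues the text of the prompt; it just so happens that the continuation is code a browser can render.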
Bluff with a blog post:
GPT-3 can generate not only computer programs but also texts, and people find it difficult to recognize that these texts come from a computer program. The computer scientist Manuel Araoz, who already has access to GPT-3, describes his experiments with the new model in a blog post.
He let GPT-3 loose on the bitcointalk.org forum, where the posts generated by the software met with approval. It is only a matter of time before talented amateurs start training similar models and using them for fake news, advertising, politics, or propaganda.
The plot twist comes only at the end of the blog post: the entire report, Araoz reveals, was written by GPT-3 itself. He gave the software only his biography, the title of the post, three keywords, and a two-sentence summary; GPT-3 generated the rest on its own.
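Araoz has not published his exact prompt, but the recipe he describes (a biography, a title, three keywords, and a two-sentence summary) can be sketched as a simple template. Every concrete string below is a hypothetical reconstruction, not his actual input.

```python
# Hypothetical reconstruction of the kind of prompt Araoz describes;
# his actual wording is not published, so all strings are placeholders.
bio = "Manuel Araoz is a computer scientist and technology entrepreneur."
title = "My experiments with OpenAI's new language model"
keywords = ["GPT-3", "language model", "bitcointalk"]  # illustrative only
summary = (
    "I describe my first experiments with OpenAI's GPT-3. "
    "The results surprised me more than I expected."
)

prompt = (
    f"Author bio: {bio}\n"
    f"Title: {title}\n"
    f"Keywords: {', '.join(keywords)}\n"
    f"Summary: {summary}\n\n"
    "Full blog post:\n"
)
# Sent to the completion endpoint as in the earlier sketch, this
# prompt is all the model needs to write the article body itself.
```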
Source code is not published:
The source code for GPT-3, in any case, will not be made publicly available. Instead, the organization behind it, OpenAI, wants to offer a programming interface for a fee.
On the one hand, this is meant to secure the financing of a “general AI”. The models behind GPT-3 are also extremely large and require a lot of computing power; only larger companies could afford to operate them themselves.
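How large is “extremely large”? A rough back-of-envelope calculation, assuming the published figure of 175 billion parameters for the biggest GPT-3 model and 16-bit storage per weight, shows why the hardware bill alone rules out most operators.

```python
# Back-of-envelope: memory needed just to hold GPT-3's weights.
# 175 billion parameters is the published size of the largest model;
# 2 bytes per parameter assumes 16-bit floating-point storage.
params = 175e9
bytes_per_param = 2
weights_gb = params * bytes_per_param / 1e9

print(f"{weights_gb:.0f} GB of weights")  # prints: 350 GB of weights

# A typical 2020 data-center GPU offers 16 to 40 GB of memory, so the
# weights alone must be split across roughly ten to twenty GPUs
# before a single word can be generated.
```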
In addition, OpenAI wants to prevent misuse of the technology. The concern is not unfounded: the predecessor GPT-2, which could also complete sentences, reproduced stereotypes that presumably appeared in the texts the model was trained on.
The future is awe-inspiring and frightening:
The future is awe-inspiring and frightening, as it should be. The application uses neural networks to find patterns and learn language. What is notable here is not the ability to generate code; many applications already do something similar, some even more capably. What is striking is that GPT-3 was never trained to program.
What is also remarkable, and extremely dangerous, is that in ingesting this data, this “intelligence” has also learned to reproduce stereotypes.
When it was set loose to generate tweets, it quickly adopted the worst of social media regarding the role of women, racism, and the promotion of violence. An artificial intelligence like this needs to feed on gigantic packets of data; given that scale, it is impossible to control everything it consumes, so the risk is always there.