Most businesses recognize that aggressive adoption of digital technologies is increasingly critical to competitiveness. Our research shows that the top 10% of early adopters of digital technologies have grown at twice the rate of the bottom 25%, and that they are using cloud systems, not legacy systems, to enable adoption, a pattern we expect to accelerate among industry leaders over the coming five years. Many laggard and middle-of-the-pack companies, by comparison, are significantly underestimating the cloud resources they will need to access, power, or train a new generation of intelligent applications presaged by breakthroughs like GPT-3, a state-of-the-art natural language processing (NLP) tool.
The big breakthroughs in AI will be about language.
The 2010s produced breakthroughs in vision-enabled technologies, from accurate image search on the web to computer vision systems for medical image analysis or for detecting defective parts in manufacturing and assembly, as we explained in detail in our book and research. GPT-3, developed by OpenAI, indicates that the 2020s will be about major advances in language-based AI tasks. Previous language processing models used hand-coded rules (for syntax and parsing), statistical techniques, and, increasingly over the last decade, artificial neural networks to perform language processing. Artificial neural networks can learn from raw data, requiring far less laborious data labeling or feature engineering. GPTs (generative pre-trained transformers) go further, relying on a transformer, an attention mechanism that learns contextual relationships between words in a text. Researchers who were granted access to GPT-3 via a private beta were able to induce it to produce short stories, songs, press releases, technical manuals, text in the style of particular authors, guitar tabs, and even computer code.
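The attention idea at the heart of a transformer can be sketched in a few lines of Python. This is a deliberately simplified, single-query illustration of how attention blends context, not GPT-3's actual implementation (which uses many layers, learned projections, and billions of parameters):

```python
import math

def attention(query, keys, values):
    """One step of the attention mechanism: score the query against
    every key, turn the scores into weights (softmax), and return a
    weight-blended mixture of the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]                      # query-key affinities
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    weights = [e / sum(exps) for e in exps]         # softmax: sums to 1.0
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]                    # contextual blend

# Toy self-attention: three "words" embedded as 2-d vectors.
embeddings = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
context_vector = attention(embeddings[0], embeddings, embeddings)
print(context_vector)  # the first word's vector, enriched with context
```

The key property is that each word's output is a weighted average over all the other words, with the weights learned from context; that is what lets a transformer resolve, say, which noun a pronoun refers to.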
GPT-3 is far from perfect. Its numerous flaws include sometimes producing nonsense or biased responses, incorrectly answering trivial questions, and generating plausible but false content. Even one of the leaders at OpenAI cautioned against over-hyping GPT-3. All of this suggests that much work remains to be done, but the writing, so to speak, is on the wall: a new stage of AI is upon us.
GPT-3 is only one of many advanced transformers now emerging. Microsoft, Google, Alibaba, and Facebook are all working on their own versions. These tools are trained in the cloud and are accessible only through a cloud application programming interface (API). Companies that want to harness the power of next-generation AI will shift their compute workloads from legacy systems to cloud-AI services like GPT-3.
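In practice, "accessible only through a cloud API" means the model never runs on your own hardware: a client sends text over HTTPS and receives text back. The sketch below shows what such a request might look like; the endpoint URL and parameter names are illustrative stand-ins, not any specific vendor's documented contract:

```python
import json

# Hypothetical cloud-AI endpoint; real providers publish their own URLs.
API_URL = "https://api.example-cloud-ai.com/v1/completions"

def build_completion_request(prompt, max_tokens=64, temperature=0.7):
    """Package a text prompt as the JSON body of an API request.
    The heavy model runs in the provider's cloud; the client only
    sends text and receives generated text in response."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": "Bearer <YOUR_API_KEY>",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "prompt": prompt,
            "max_tokens": max_tokens,       # cap on generated length
            "temperature": temperature,     # randomness of the output
        }),
    }

req = build_completion_request(
    "Write a one-sentence product description for a smart thermostat.")
print(req["url"])
```

Because all the computation happens server-side, the business decision is less about buying GPUs and more about cloud capacity, API costs, and data governance.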
Next-gen apps will enable innovation across the enterprise.
These cloud-AI services will enable the development of a new class of enterprise apps that are more creative (or “generative” — the “G” in GPT) than anything we’ve seen before. They will make the process of synthesizing words, intentions, and information in language cheaper, which will make many business activities more efficient and stimulate the innovation and growth we see with early adopters.
Our analysis of more than 50 business-relevant proofs of concept (demos) of GPT-3 indicates that tomorrow’s leading-edge business apps will fall into at least three broad creative categories, all linked to language understanding: writing, coding, and discipline-specific reasoning.
GPT-3’s ability to write meaningful text based on a few simple prompts, or even a single sentence, can be uncanny. For instance, one of GPT-3’s private beta testers used it to produce a convincing blog post about bitcoin. Among the demos we analyzed, there were apps for developing new podcasts, generating email and ad campaigns, suggesting how to run board meetings, and intelligently answering questions that would befuddle earlier language systems.
Based on prompts from humans, GPT-3 can also code — writing instructions for computers or systems. It can even convert natural language to programming language. In a natural language (English, Spanish, German, etc.), you describe what you want the code to do — such as develop an internal or customer-facing website. GPT then writes the program.
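A minimal sketch of how such a translation might be set up is below. The prompt format is our own illustration of the general pattern (describe the task in English, let the model complete the code), not a documented GPT-3 recipe:

```python
def code_generation_prompt(description, language="Python"):
    """Wrap a plain-English description in a prompt that asks a
    generative model to emit source code in the requested language.
    The model's completion, appended after '# Code:', would be the
    generated program."""
    return (
        f"# Task: write {language} code that does the following.\n"
        f"# Description: {description}\n"
        f"# Code:\n"
    )

prompt = code_generation_prompt(
    "Show a web page with a button that displays today's date when clicked."
)
print(prompt)
```

The non-technical user writes only the description; the prompt scaffolding and the call to the cloud API can be hidden behind a simple form.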
The ability to reason about content, procedures, and knowledge in a scientific or technical field suggests other potentially fertile applications of GPT-3. It can answer chemistry questions — in one demo, it correctly predicted five of six chemical combustion reactions. It can autoplot graphs based on verbal descriptions, taking much of the drudgery out of tasks like creating presentations. Another beta tester created a GPT-3 bot that enables people with no accounting skills to generate financial statements. Another application can answer a deliberately difficult medical question and discuss underlying biological mechanisms. The app was given a description of a 10-year-old boy’s set of respiratory symptoms and was informed that he was diagnosed with an obstructive disease and given medication. Then it was asked what protein receptor the medication was likely to act on. The program correctly identified the receptor and explained that the boy had asthma and that it is typically treated with bronchodilators that act on that receptor.
This general reasoning potential across writing, coding, and science suggests that the use of cloud-powered transformers could become a meta-discipline, applicable across management sciences, data sciences, and physical and life sciences. Further, across non-technical jobs, cloud in combination with GPT-3 will lower the barrier for scaling digital innovations. Non-technical staff will be able to use everyday natural language rather than programming languages to build apps and solutions for customers.
Reimagined jobs will increase productivity.
In light of these coming changes, companies will not only need to rethink IT resources, but also human resources. They can begin by analyzing the bundles of tasks in current roles, uncovering specific tasks that the AI can augment, and unleashing technical and non-technical workers alike to innovate faster. Using the Occupational Information Network (O*NET), based on a U.S. government standard used to classify workers into occupational categories, we analyzed 73 job categories in 16 career clusters, and found that all clusters would be impacted by GPT-3. Digging into job categories, we found that 51 can be augmented or complemented by GPT-3 in at least one task, and 30 can use GPT-3 to complement two or more tasks.
Some tasks can be automated, but our analysis shows the larger opportunity will be around augmenting and amplifying human productivity and ingenuity. For example, communications professionals will see most of their routine text-generation tasks automated, while more critical communications, like ad copy and social media messages, will be augmented by GPT-3’s ability to help develop lines of thought. Company scientists might use GPT-3 to generate graphs that inform colleagues about the product development pipeline. Meanwhile, to augment basic research and experimentation, they could consult GPT-3 to distill the findings from a specific set of scientific papers. The possibilities across disciplines and industries are limited only by the imagination of your people.
Don’t get left behind.
The time to prepare is now. The next generation of enterprise apps won’t run on legacy systems, and companies will need to move to the cloud more aggressively than they are now. Wait-and-see won’t do. On October 1, OpenAI will launch GPT-as-a-service, making the API available to beta users. Leaders will be adopting and adapting GPT-3 within months, learning where it works best or where it doesn’t work at all. They will get a head start on redesigning jobs and on the issues of privacy, security, and social responsibility that surround all AI. And over the next two years, you can expect to see them putting all sorts of apps into production, finding opportunities for innovation that will put laggards even further behind.