Like other types of artificial intelligence, generative AI learns how it should act based on historical data. Instead of just categorizing or identifying data like other AI, it produces brand-new content based on that training, such as text, images, and even code. The most advanced generative AI model to date is GPT-4.
The most well-known application of generative AI is ChatGPT, a chatbot that OpenAI released in late 2022. The large language model's claim to fame is its ability to take in a text prompt and produce a human-like response.
Since then, OpenAI has further developed the technology and expanded its capabilities. The company recently unveiled GPT-4, a newer model it dubs "multimodal", meaning it is capable of understanding not only text but also images.
Since the release of ChatGPT, the company behind it has made a couple of very lucrative deals to secure its future. Microsoft became a major investor, putting over 10 billion dollars into OpenAI, released its own Bing AI chatbot and announced a Copilot product to be incorporated into its Microsoft 365 services. Salesforce made a deal with OpenAI as well and introduced Einstein GPT.
Although many people endorse the advancements in generative AI, some industry titans are playing devil's advocate. They insist that AI labs immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
Tip: “AI development should be paused to implement security measures”
According to an open letter, “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Despite having co-founded OpenAI in 2015 (before leaving the board in 2018), Elon Musk, CEO of Tesla, SpaceX and Twitter, is among the thousands of signatories to the open letter.
There is no way to ignore the generative AI race between technology giants. But what exactly is generative AI capable of? What are the differences between OpenAI's releases, and how will they influence businesses?
What can generative AI do?
Generative AI's capabilities fall into three general categories. First and foremost, it can generate material: texts, ideas, images, videos and designs. By learning from historical data and connecting the dots, it can develop new, one-of-a-kind outputs in a variety of media, such as blogs, art, video advertisements, a novel protein with antimicrobial properties or a new design for a computer chip.
Second, it aids in the personalization of interactions by creating content and information for a specific audience. For example, chatbots for personalized customer experiences or targeted ads based on patterns in a customer’s behavior.
Finally, it can increase productivity. Manual or repetitive activities, such as coding or summarizing large documents, can be sped up.
In a similar vein, generative AI can take notes during a virtual conference. It can make slide presentations or personalize emails. In recent product announcements, Microsoft and Google both showed features for their Office suite.
Misuses and hallucinations
Despite all of the advantages of this technology, there is concern about possible abuse. Some companies are training generative AI models on massive amounts of data obtained from the internet, including copyrighted documents. As a result, ethical AI techniques have become an organizational requirement.
Read Also: Google introduces principles for responsible AI
Educational institutions have raised concerns that students would submit AI-drafted essays, undermining the hard work needed for them to learn. Researchers in cybersecurity have also voiced concern that generative AI could enable bad actors, including governments, to spread far more misinformation than they previously could.
Apart from these worries, the technology itself is also prone to errors. AI confidently asserts factual inaccuracies, which researchers dub "hallucinations." Some responses appear believable at first glance but are not supported by the bot's training data. The technology gives an answer, and may repeatedly insist it is true, without any internal awareness that the answer is a product of its own imagination.
Hallucinations and erratic responses such as professing love to a user are just a few reasons why companies have sought to test the technology before making it accessible to all.
GPT-3, GPT-3.5, GPT-4, what is the difference?
OpenAI formally released the most recent version of its language model system, GPT-4, on March 13, 2023. The release comes with a paid subscription granting users access to the tool. For now, complete access to the model’s capabilities remains restricted. The free version of ChatGPT continues to use the GPT-3.5 model.
Unlike GPT-3.5, the latest model takes images as well as text instructions as input. Users can, for example, enter a hand-drawn sketch into the AI chatbot, which converts the sketch into a functional website. The fact that GPT-3 could only accept and return text greatly restricted its use cases.
While image recognition in GPT-4 is still in its early stages, users can ask the software to describe what is in an image. For people with vision impairments, this is valuable technology: they can ask the AI to describe what is happening in an image. Demonstrations by OpenAI show GPT-4 reading out a map, describing a clothing item's design and demonstrating how to use a piece of exercise equipment.
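As a sketch of how such a multimodal prompt might be submitted programmatically, the snippet below builds a request payload that pairs a text question with an image reference. The field names are modeled on OpenAI's chat-completions format, but the exact shape is an assumption for illustration, not something the article confirms:

```python
# Sketch of a multimodal prompt payload for a chat-style API.
# The field names below are an assumption modeled on OpenAI's
# chat format; the real endpoint may differ.

def build_image_prompt(question: str, image_url: str) -> dict:
    """Combine a text question and an image reference in one user message."""
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_image_prompt(
    "Describe what is happening in this image.",
    "https://example.com/sketch.png",
)
print(payload["messages"][0]["content"][0]["text"])
```

Only the payload construction is shown here; actually sending it requires an API client and credentials.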
In addition to image identification, OpenAI trained GPT-4 on a wide range of prompts, many of them malicious. The most recent iteration of GPT is 82% less likely than GPT-3.5 to respond to requests for prohibited content, and it is 40% more likely to generate accurate responses, according to OpenAI.
Because GPT-4 is less likely to react to malicious requests, it could be safer for users overall. However, refusing restricted material is not 100% guaranteed, so the AI might still give insensitive replies to input prompts.
GPT-4 is far more advanced than its predecessor
Like its predecessors, GPT-4 learned its behavior from training data and is still prone to errors such as hallucinations. Despite this, the new model should provide a far better overall experience than its predecessor.
Furthermore, OpenAI claims that GPT-4 can analyze 25,000 words at once, 8 times as many as GPT-3. The most recent model is also better at providing factual information and has much more sophisticated reasoning abilities than GPT-3.5 and GPT-3.
OpenAI demonstrated this in a case study where researchers posed the same scenario to GPT-4 and GPT-3.5. The bots had to find a 30-minute window in which three given schedules overlap. Both AIs were able to provide a solution, but GPT-4's was more accurate and straightforward. This could indicate that it will provide more reliable and fact-based answers than its forerunner.
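For comparison, the scheduling puzzle itself is a classic interval-intersection problem that conventional code solves deterministically in a few lines. The schedules below are illustrative examples, not the ones OpenAI used:

```python
# Find a 30-minute window in which three people's free intervals overlap.
# Times are minutes since midnight; each schedule is a list of
# (start, end) free intervals.

def find_common_window(schedules, minutes=30):
    """Return (start, end) of the first overlap of at least `minutes` long,
    or None if no such window exists."""
    def intersect(a, b):
        out = []
        for s1, e1 in a:
            for s2, e2 in b:
                s, e = max(s1, s2), min(e1, e2)
                if s < e:
                    out.append((s, e))
        return out

    common = schedules[0]
    for sched in schedules[1:]:
        common = intersect(common, sched)
    for s, e in common:
        if e - s >= minutes:
            return (s, s + minutes)
    return None

alice = [(9 * 60, 11 * 60), (13 * 60, 15 * 60)]             # 9:00-11:00, 13:00-15:00
bob = [(10 * 60, 12 * 60), (14 * 60, 16 * 60)]              # 10:00-12:00, 14:00-16:00
cara = [(10 * 60 + 30, 11 * 60 + 30), (14 * 60, 17 * 60)]   # 10:30-11:30, 14:00-17:00

print(find_common_window([alice, bob, cara]))  # → (630, 660), i.e. 10:30-11:00
```

The point of the case study, of course, was not that the puzzle is hard, but that a general-purpose language model could reason its way to the answer from a plain-text description.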
Additionally, GPT-4 outperforms GPT-3 on common machine learning benchmarks by up to 16%. It can handle multilingual tasks better as well, making it more accessible to non-English speakers.
Transformative possibilities of GPT-4 in the corporate world
There are numerous advantages that companies can gain from using generative AI. Because GPT-4 can manage 8 times more text than GPT-3, it may be better equipped to handle larger documents, making it more efficient in specific work environments.
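Even an 8-times-larger window has limits, and a common workaround for documents that still exceed it is to split the text into overlapping chunks and process them one at a time. A minimal sketch, using word counts as a rough stand-in for tokens (real pipelines would use the model's own tokenizer):

```python
# Split a document into overlapping word-based chunks so each piece
# fits a model's context window. Word counts only approximate tokens.

def chunk_words(text: str, max_words: int = 25_000, overlap: int = 200):
    """Yield overlapping chunks of at most `max_words` words; consecutive
    chunks share `overlap` words so context isn't cut mid-thought."""
    words = text.split()
    step = max_words - overlap
    for i in range(0, len(words), step):
        yield " ".join(words[i:i + max_words])

doc = "word " * 60_000          # stand-in for a document larger than one window
chunks = list(chunk_words(doc))
print(len(chunks))              # → 3 chunks of at most 25,000 words each
```

Each chunk can then be summarized separately and the partial summaries combined, a pattern often called map-reduce summarization.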
It could be used to increase labor efficiency and personalize customer experiences. It can also accelerate R&D through generative design and develop new business models, to name a few applications.
In a recently released report, Goldman Sachs economists projected that the latest wave of AI could automate as many as 300 million full-time jobs worldwide. This means that 18% of global work could be computerized.
Tip: The jobs most at risk from generative AI like ChatGPT
They asserted that advanced economies would be more impacted than emerging markets. This is partly due to the belief that white-collar employees are more vulnerable than manual laborers.
Administrative workers and attorneys are expected to be the most affected, according to the economists. On the other hand, physically demanding or outdoor jobs (such as construction and repair work) will likely experience “little effect.”
The implications of generative AI for business leaders are enormous, and many businesses have already launched generative AI initiatives. Businesses are developing tailored generative AI models by fine-tuning them on their own data.
Read Also: “AI will cause significant global labor market disruption”
GPT-4 industry applications
The near future seems to hold the greatest potential for growth for four specific sectors: consumer, finance, software development and health care.
Generative AI can customize experiences, content, and product suggestions for consumer marketing campaigns. In the field of finance, it can create individualized investment suggestions, examine market data and put various hypotheses to the test in order to suggest new trading approaches.
OpenAI claims that GPT-4 “passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%.”
Although many consider the technology a potential threat to developers, some employers disagree. Some companies are considering permitting candidates to use it in interviews, since they will be expected to use it on the job.
However, there are legitimate concerns, and some large tech firms have prohibited their engineers from using it, fearing that their proprietary code will end up in OpenAI's hands to train future models. Nevertheless, the tool has a lot to offer.
Tip: Several companies forbid employees to use ChatGPT
For the past six months, Microsoft Research and OpenAI have been researching the potential uses of GPT-4 in health care and medical applications to better understand its basic capabilities, limitations and risks to human health.
The research found that AI can significantly speed up the biopharmaceutical industry's R&D cycles, because it can produce data on millions of potential cures for a given disease and evaluate their efficacy. Applications in medical and health care documentation, data interoperability, diagnosis, research, and education are just a few examples.
A special report published in AI in Medicine on March 30th explains the benefits, limitations and risks of GPT-4 as an AI chatbot for medicine. The authors investigated several other AI chatbots for medical applications as well. Google’s LaMDA and GPT-3.5 are two of the most significant.
Notably, none of LaMDA, GPT-3.5 and GPT-4 was explicitly trained for health care or medical applications; their training regimens aimed at general-purpose cognitive capability. The authors of the special report point out that GPT-4 is still in development and say their paper only scratches the surface of its capabilities.
Listing some possible applications, they say that it can write software for data processing and visualization. It can plainly explain explanation-of-benefits notices and laboratory tests for readers who are unfamiliar with the medical language used in each. It can also compose emotionally supportive notes to patients.
Acknowledging the rate at which this technology is developing, the time to initiate internal innovation is now. Business leaders in every sector should consider adopting generative AI into production systems as soon as possible.
Companies that ignore the transformative possibilities of generative AI could find themselves at a significant, and possibly unbridgeable, cost and creativity disadvantage.