OpenAI’s GPT-4 to provide responses through AI-generated videos: Report
GPT-4, the next iteration of OpenAI's large language model (LLM), could be released soon and may be capable of generating AI-powered multimedia content. According to Microsoft Germany CTO Andreas Braun, GPT-4 will be released next week and will include multimodal models able to answer user questions with music, video, and images. This reportedly distinguishes GPT-4 from ChatGPT, which can respond to user queries only in text.
In addition to its multimodal capabilities, GPT-4 is expected to respond more quickly than ChatGPT and in a more human-like manner. OpenAI is also reportedly developing a mobile application that uses GPT-4; ChatGPT is currently accessible only through the web.
The integration of GPT-4 into Bing has been rumoured, although neither Microsoft nor OpenAI has confirmed it. To deliver real-time results, Bing search currently uses GPT-3 and GPT-3.5 alongside Microsoft's own proprietary technology, Prometheus.
Recent controversies surrounding the Bing chat assistant may also explain why Microsoft and OpenAI are staying quiet about adding GPT-4 to Bing. Nonetheless, given its capacity to produce multimedia content and offer quicker, more human-like responses, GPT-4 could be a useful addition to the platform.