OpenAI’s ChatGPT will ‘see, hear and speak’ in major update
OpenAI’s ChatGPT is getting a major update that will enable the viral chatbot to have voice conversations with users and interact using images, moving it closer to popular artificial intelligence (AI) assistants like Apple’s Siri.
The voice feature “opens doors to many creative and accessibility-focused applications”, OpenAI said in a blog post on Monday.
Similar AI services like Siri, Google Assistant and Amazon.com’s Alexa are integrated with the devices they run on and are often used to set alarms and reminders and retrieve information from the internet.
Since its debut last year, ChatGPT has been adopted by companies for a wide range of tasks from summarising documents to writing computer code, setting off a race amongst Big Tech companies to launch their own offerings based on generative AI.
ChatGPT’s new voice feature can also narrate bedtime stories, settle debates at the dinner table and read users’ typed input aloud.
The underlying voice technology is being used by Spotify to let the platform’s podcasters translate their content into different languages, OpenAI said.
With image support, users can take pictures of things around them and ask the chatbot to “troubleshoot why your grill won’t start, explore the contents of your fridge to plan a meal, or analyse a complex graph for work-related data”.
Alphabet’s Google Lens is currently the popular choice for looking up information about images.
The new ChatGPT features will be released for subscribers of its Plus and Enterprise plans over the next two weeks.