Friday, September 20, 2024

ChatGPT gains the ability to fetch current information



Microsoft-backed OpenAI has announced that its chatbot, ChatGPT, can now browse the internet to provide users with up-to-date information. Previously, the AI-powered system was trained only on data up to September 2021. The new feature lets premium users ask the chatbot about current affairs and access news, and OpenAI plans to roll it out to all users in the near future. The company also revealed that ChatGPT will soon be able to hold voice conversations with users. Both announcements were made on X, the platform formerly known as Twitter.

ChatGPT and similar systems use vast amounts of data to generate human-like responses to user queries, and these advances are expected to change the way people search for information online. Until now, the chatbot's lack of awareness of current events had deterred some potential users. With the new browsing capability, users can treat ChatGPT as a source of the latest news, gossip, and current events.

Tomas Chamorro-Premuzic, a professor of business psychology at University College London, believes the browsing feature will divert a significant share of inquiries away from search engines and news outlets. He warns, however, that relying solely on ChatGPT for information is a double-edged sword: while it provides quick answers, the absence of reliable sourcing raises concerns about accuracy and potential misinformation.

OpenAI has already faced scrutiny from US regulators over the risk of ChatGPT generating false information. Earlier this year, the Federal Trade Commission (FTC) sent the company a letter requesting information on how it addresses risks to people's reputations. OpenAI's CEO expressed a willingness to work with the FTC on these concerns.

When asked why it took so long to give users access to up-to-date information, ChatGPT offered three reasons: developing the underlying language models is time-consuming and resource-intensive; drawing on real-time data risks introducing inaccuracies; and there are privacy and ethical concerns about accessing copyrighted content without permission.