How will using AI tools for Internet searches change humanity’s relationship with machines?

In recent years, technology has advanced to the point where it can provide personalized and interactive responses to user queries. The most notable development in this field is the creation of language models such as ChatGPT, which use natural language processing to provide human-like interactions. The technology has been adopted by some of the biggest players in the tech industry, including Google, Microsoft, and Baidu, to create conversational search engines that could change the relationship between humans and machines.

Microsoft's Bing uses the same technology as ChatGPT, which was developed by OpenAI in San Francisco, California. Bard, Google's AI-powered search engine, was announced on February 6 and is currently being used by a small group of testers. Microsoft's version is now widely available, although there is a waiting list for unrestricted access. Baidu's ERNIE Bot is due to launch in March.

The use of conversational search engines has changed the way people interact with technology. Instead of entering a query and receiving a list of results, users can now have a conversation with their search engine, ask follow-up questions, and receive personalized responses. This can make the search process more efficient and intuitive, especially for users who have difficulty expressing their queries.

On a positive note, the adoption of conversational search engines could also have a significant impact on the future of the human-machine relationship. As technology becomes more sophisticated and is able to mimic human interactions, it could blur the lines between humans and machines.

A 2022 study by a team at the University of Florida in Gainesville found that, for participants who interacted with chatbots used by companies such as Amazon and Best Buy, the more human-like they perceived the conversation to be, the more they trusted the chatbot.

On the downside, this enhanced sense of trust could be problematic, given that AI chatbots make mistakes. For common questions, their answers may be reliable, but when ChatGPT does not know an answer, it tends to fabricate one. Some have speculated that, if discovered, these errors could cause users to lose confidence in chat-based search rather than gain it.

Compounding the problem of inaccuracy is a relative lack of transparency. Typically, search engines show users their sources—lists of links—and let them decide what they trust. How AI-powered search will work is completely opaque, and if language models malfunction, hallucinate, or spread misinformation, this could have significant repercussions. If search bots make enough mistakes, far from increasing trust in their conversational abilities, they risk subverting users' perceptions of search engines as unbiased arbiters of truth.

Chatbot-driven searches blur the distinction between machines and humans, says Giada Pistilli, chief ethicist at Hugging Face, a Paris-based data-science platform that promotes the responsible use of artificial intelligence. She worries about how quickly companies are adopting AI advances: "We always have these new technologies thrown at us without any control or educational framework of how to use them."

