Bard, a chatbot developed by Google, can respond to questions, do internet research, and even return images.
Artificial Intelligence (AI) is a computer program capable of performing tasks or producing outputs it was not specifically programmed for. This technology has been around for a while, but major advancements have been made in the last few years, particularly in the field of generative AI. Generative AI is software that can create content, such as text or images, based on a prompt. Companies are integrating this AI into all kinds of software, but this article will focus on chatbots. The chatbots will be compared by capability of the underlying model, features, and privacy, among other metrics. Due to the emerging nature of this technology, they will be graded on a curve.
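Real chatbots generate text with large neural networks, but the basic idea of prompt-conditioned generation can be sketched with a toy word-level Markov chain. This is a simplification for illustration only; the tiny corpus, function names, and sampling scheme below are invented for this example and have nothing to do with how any of the chatbots reviewed here actually work.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record which words follow which in the training text."""
    words = corpus.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, prompt, length=10):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the cat", length=5))
```

The output continues the prompt with words the model has seen follow one another, which is the same contract an LLM fulfills, just with a vastly more sophisticated model of "what comes next."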
Based on OpenAI's most capable model, GPT-4, ChatGPT Plus is an incredibly versatile chatbot. It can interpret and debug code, analyze audio and video files, search the web, and use third-party plugins, in addition to the features of standard ChatGPT (see ChatGPT below).
Bard can write code as well as text, as shown by this Python script.
Built on Google's Pathways Language Model (PaLM), Bard is a versatile AI system for all your chatbot needs. Bard can write text, generate code, search the internet, and even return images and links (though it can't generate its own images yet). Bard can also take images and videos as input in addition to text prompts. However, Bard's responses are often bland and generic, and the chatbot is simply incapable of engaging in normal conversation unless jailbroken (and even then it is not very good at it). When asked for its favorite color, it wrote several paragraphs, complete with sources, on why it likes blue. This is a big difference from PaLM's predecessor, LaMDA, which a Google engineer famously argued was sentient (he was later fired over the disclosure). Bard's hallucination rate is rather high normally, causing it to make up quotes and give false information, and it goes through the roof during attempts at conversation. In one test, Bard told me it liked the color purple because it looked good on its fur. Bard is subject to Google's horrendous privacy policy, which is one of the system's main drawbacks.
Based on OpenAI's GPT-3.5 model, ChatGPT is the best-known AI chatbot. It can write text, such as essays and social media posts, as well as code, and it can also summarize topics and provide information, assuming no knowledge from after September 2021 (its knowledge cutoff) is required. ChatGPT can respond in at least 95 languages, and GPT-3.5 was found to be better at language translation than other AIs or Google Translate. It is prone to hallucinations, as all LLMs are, but these hallucinations can be harnessed to do some very interesting things, like simulating a Linux machine. OpenAI's privacy policy is not particularly good, but it is not particularly bad either.
ChatGPT can generate content in a variety of formats.
Bing AI allows you to choose different modes to get the right output.
Based on a modified version of OpenAI's GPT-4 model, the most powerful model (on most important metrics, anyway) available for public use, Bing Chat (a.k.a. Bing AI, The New Bing, Sydney, etc.) is free to all users with a Microsoft account (also free) using Microsoft's Edge browser (or Vivaldi pretending to be Edge). It can generate images with OpenAI's DALL·E model, write code, search the web, and summarize information. However, it has some major issues. It is an absolute failure when it comes to conversation. In addition, it is heavily censored, and it rejects any questions about itself by terminating the chat. This is also the response to any mention of its internal codename, Sydney. The chatbot interface also closed during my testing whenever I pressed the space bar, which is a major drawback of the system. (Update 09/17/23: after testing Bing again, the issue did not persist.) Bing search and its AI chat function, Bing Chat, are governed by Microsoft's privacy policy (so you can get some spyware and targeted ads with your chatbot), including the Bing-specific section, both of which can be found here.
You.com is an AI search engine which, according to its own chatbot, "is [intended] to provide users with a customized search experience ... by leveraging artificial intelligence." In this article, I am primarily rating the chat function of you.com, along with its image generator and writing function. The chatbot itself is based on GPT-3 and is able to write code, search the web, summarize topics, and answer questions. It appears to lean heavily on its web search function: when asked which AI model its image generator uses, it failed to answer and instead listed various pages where it had detected the word "imagine." And when asked to summarize specific websites, articles, or videos, it fails, claiming that it can't access them, a task that other AI chatbots like Bard and Bing can perform easily. You.com has a premium subscription that allows unlimited access to the image generation and writing tools and the use of a GPT-4-based chatbot, among other features. Their privacy policy is pretty good compared to some other services on this list (I'm not going to name names here, but for some reason Google comes to mind), but I would still recommend using you.com in your most private browser. You.com has a private mode which turns off almost all data collection, but this mode blocks the writing and image generation functions, as well as any benefits of a paid account.
LLaMA is highly prone to hallucinations. In this image, it is corrected after erroneously claiming that it was developed at Carnegie Mellon University. It was actually developed by Meta.
Developed by Meta AI (the artificial intelligence division of Meta, which owns Facebook), the LLaMA family of LLMs is semi-open-source. Its source code, written in PyTorch, is publicly available on GitHub. Meta did not release a LLaMA chatbot, instead allowing others to use its model for this purpose. For this article, I will be using the Perplexity Labs LLaMA instance, which uses LLaMA 2. It also integrates Meta's Code LLaMA model, which allows it to write scripts in various languages. In addition to the code model, it has three variants of LLaMA 2: 7b, 13b, and 70b, which can be used for summarizing information and creating pieces of writing. The responses from the 13b and especially the 7b models are sometimes quite strange, so I would recommend sticking with the 70b version. The model is quite prone to hallucinations. I initially asked who developed the LLaMA system, and it responded that it was a team at either Carnegie Mellon or UC Berkeley, persisting in this view and even claiming that development by Meta was, quote, a "common misconception," until I gave it an outside source proving otherwise. One area where LLaMA does quite well is conversation, where the 13b model is especially proficient. This could be due to a lower level of rules and censorship compared to major AI providers like OpenAI and Google. Perplexity also operates an AI assistant called Copilot based on GPT models, which acts as a search summarizer rather than a chatbot, disqualifying it from this list. Perplexity's privacy policy, which governs both tools, reveals a lot of data collection by third parties, including Google Analytics.
In addition to the chatbots reviewed in this article, two others, Jasper and ChatSonic, received initial consideration. ChatSonic was eliminated because it detects tracker blockers and will not work unless they are disabled. Jasper was eliminated because it used a Cloudflare "pulse check" as part of its login process. Neither service should be used for any reason, on any browser.
* Denotes a paid service or service with paid features
This article was published on August 27, 2023 under General Knowledge. It was last updated on September 17, 2023.