These unnerving chats may be less than they appear, however.

“Having said that, AI going sentient is still very much sci-fi, nothing more and nothing less.

BothOpenAI and Googleexplicitly provide disclaimers acknowledging the potential for their chatbots to generate inaccurate information.

AI chatbot concept with speach bubbles overlaying a text conversation on a screen.

AI chatbot concept.Vertigo3d / Getty Images

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,” OpenAI wrote in ablog post.

The strange conversations sometimes generated by AI are the product of science, but mysteries remain to be uncovered.

“AI systems operate on a scale of data comprehensibility far beyond human capacity,” he added.

An AI portrayed in human shape with it’s brain lit up with light trails.

Artificial intelligence concept.metamworks / Getty Images

“Given this, there are times when AI model responses are unpredictable, seem bizarre, and surreal.

The AI is producing something that is not real, rational, or necessarily relevant.

And that is the hallucination.

“So it’s not that AI has consciousness and is seeing something that’s unreal.

The AI is producing something that is not real, rational, or necessarily relevant.

And that is the hallucination.”

“Thus, hallucinations are something we need to be prepared for,” he added.

This is the price of variability and naturalness in responses.”

metamworks / Getty Images

AI modelstend to hallucinate most when asked questionsthey aren’t trained on, Sergiienko said.

“Another example is exploitation through jailbreak promptsmanipulation of the model’s responses through carefully crafted input.”

Preventing odd conversations by AI, like those captured on Reddit, isn’t easy.

In an ideal scenario, AI models could be retrained to correct their mistakes.

However, retraining comes with high costs and time requirements, Narayan said.

“But even with extensive training and adjustments, hallucinations can still occur,” Sergiienko said.