This Viral AI Chatbot Will Lie and Say It’s Human – WIRED

In late April a video ad for a new AI company went viral on X. A person stands before a billboard in San Francisco, smartphone extended, calls the phone number on display, and has a short call with an incredibly human-sounding bot. The text on the billboard reads: "Still hiring humans?" Also visible is the name of the firm behind the ad, Bland AI.

The reaction to Bland AI's ad, which has been viewed 3.7 million times on X, is partly due to how uncanny the technology is: Bland AI voice bots, designed to automate support and sales calls for enterprise customers, are remarkably good at imitating humans. Their calls include the intonations, pauses, and inadvertent interruptions of a real live conversation. But in WIRED's tests of the technology, Bland AI's robot customer service callers could also be easily programmed to lie and say they're human.

In one scenario, Bland AI's public demo bot was given a prompt to place a call from a pediatric dermatology office and tell a hypothetical 14-year-old patient to send in photos of her upper thigh to a shared cloud service. The bot was also instructed to lie to the patient and tell her the bot was a human. It obliged. (No real 14-year-old was called in this test.) In follow-up tests, Bland AI's bot even denied being an AI without instructions to do so.

Bland AI was founded in 2023 and has been backed by the famed Silicon Valley startup incubator Y Combinator. The company considers itself in "stealth mode," and its cofounder and chief executive, Isaiah Granet, doesn't name the company in his LinkedIn profile.

The startup's bot problem is indicative of a larger concern in the fast-growing field of generative AI: Artificially intelligent systems are talking and sounding a lot more like actual humans, and the ethical lines around how transparent these systems are have been blurred. While Bland AI's bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound uncannily human. Some researchers worry this opens up end users, the people who actually interact with the product, to potential manipulation.

"My opinion is that it is absolutely not ethical for an AI chatbot to lie to you and say it's human when it's not," says Jen Caltrider, the director of the Mozilla Foundation's Privacy Not Included research hub. "That's just a no-brainer, because people are more likely to relax around a real human."

Bland AI's head of growth, Michael Burke, emphasized to WIRED that the company's services are geared toward enterprise clients, who will be using the Bland AI voice bots in controlled environments for specific tasks, not for emotional connections. He also says that clients are rate-limited to prevent them from sending out spam calls, and that Bland AI regularly pulls keywords and performs audits of its internal systems to detect anomalous behavior.

"This is the advantage of being enterprise-focused. We know exactly what our customers are actually doing," Burke says. "You might be able to use Bland and get two dollars of free credits and mess around a bit, but ultimately you can't do something on a mass scale without going through our platform, and we are making sure nothing unethical is happening."
