PHILIP K. DICK was living a few miles north of San Francisco when he wrote Do Androids Dream of Electric Sheep?, which envisioned a world where artificially intelligent androids are indistinguishable from humans. The Turing Test has been passed, and it’s impossible to know who, or what, to trust.

A version of that world will soon be a reality in San Francisco. Google announced this week that Duplex, the company’s phone-calling AI, will be rolled out to Pixel phones in the Bay Area and a few other US cities before the end of the year. You might remember Duplex from a shocking demonstration back in May, when Google showed how the software could call a hair salon and book an appointment. To the receptionist on the other end of the line, Duplex sounded like a bona fide person, complete with pauses and “ums” for more human-like authenticity.

Duplex is part of a growing trend to offload basic human interaction to robots. More and more text messages are being automated: ride-sharing apps text you when your car is there; food-delivery apps text you when your order has arrived; airlines text you about delays; political campaigns send you reminders to vote. Smartphones predict the words you might want to complete your own texts; recently, Google’s Gmail has attempted to automate your side of the conversation in emails as well, with smart responses and suggested autocomplete.

These efforts fall short of full automation; they are suggestions you must act on. But even that may soon be a thing of the past: On Wednesday, Bloomberg reported that Google Android creator Andy Rubin’s company, Essential Products, is going all-in to develop a phone that “will try to mimic the user and automatically respond to messages on their behalf.”

Convenient? Maybe. If your pharmacy texts to ask if you want a prescription refilled, it would be nice—I suppose?—if your phone would just respond “yes.” But when you couple automated tasks with human impersonation, you get into uncomfortable territory.

It’s … weird. As human interaction has moved increasingly online—from email and chat apps to social media networks—the question of authenticity has always been a concern. Back when AIM was the new thing, parents worried about whom their teenagers were actually talking to in chat rooms. (Rightfully so! I was probably chatting with some creeps back then.) With the introduction of artificially intelligent chatbots, and their growing sophistication, the worries take on a different tenor. No longer are we just concerned that the people we’re communicating with are who they say they are; now we also need to worry about whether they are people at all.

Privacy experts have worried about this since the beginning of the bot invasion. “The emergence of social bots, as means of entertainment, research, and commercial activity, poses an additional complication to online privacy protection by way of information asymmetry and failures to provide informed consent,” wrote social scientist Erhardt Graeff in a 2013 paper that argued for legislation on social bots that would protect user privacy. In the wake of disinformation campaigns fomented, at least in part, by bots, California passed a law last week requiring online chatbots to disclose that they aren’t human.

Consent was a big concern after the Google Duplex demos in May. In the first demo, when the “woman” called to make an appointment for a haircut for a client, she didn’t identify herself as a robot. The person on the phone seemed to have no idea she was talking to a robot. Was that ethical? To trick her? If the content of the conversation was similar to what it would have been with a real human, does it matter?

Google’s way of dealing with that question is to build a disclosure into the operational version of Duplex. When people in San Francisco finally use the AI assistant to make appointments, Duplex will alert the person it’s calling that it’s a bot. At least at first, users will only be able to direct the service to make reservations at restaurants that don’t have online booking.

All of that is a very important and welcome start. “I am very heartened that Google is focused on transparency because that is a fundamental ethical principle,” says Stanford innovation ethicist Susan Liautaud. She is also happy that Google is rolling Duplex out slowly, and has indicated that it is open to feedback.

Transparency is important, and it’s great that Google is pledging to do the right thing. But the point is it doesn’t have to. The tech is now good enough to trick us, and the only way we’ll know we’re talking to a bot is because the bot’s creators told it to say so.

This gets at the reason ethics in tech is such a crucial issue today: Right now, with a lack of laws and regulations around emerging technology, consumers are at the mercy of big tech companies like Google. But as Liautaud is quick to point out, “this is not a Google-specific problem.” She adds: “Right now we have this unprecedented and unpredictable scattering of power, so we are dependent on these players to lead with innovation in ethics along with their innovation in technology.” In other words, we have to trust them—and at the moment, trust in these companies is at an all-time low.

Trust will be an even trickier question when you consider that this technology could—and likely will—evolve into territory more intimate than scheduling appointments. Though I personally believe that interacting with strangers throughout the day while out in the world, whether on the phone or at the checkout line in the grocery store, is a wonderful and important part of living in a society, I understand that other people feel differently. Some people are averse to talking to strangers on the phone. For them, this technology could be a definite benefit. You can argue that not much of our humanity is lost if we allow a bot to, say, call our dentist’s office to schedule a cleaning. But what if we let a bot wish our mom a happy birthday over text? We will need to figure out where on that slippery slope human connection is ceded, in some essential way, to bots that neither care about nor actually represent us.

Again, this is something we’re already moving toward. Take Facebook’s personal reminders. We used to have to remember the special days of our loved ones and friends, or make a point of marking them in ink on our calendars. Now we have Facebook for that. Is there really a difference between automating the reminder and automating the actual greeting?

Of course, some people have been able to ask human assistants to send their spouses flowers since the invention of the 9-to-5 work week. But now that cop-out will become easier, and the person doing the purchasing of said flowers won’t even be a person. Those small human moments where we pick up the phone and call someone and dictate a card are a part of modern life. The “objectivity” of robots—arguably something that doesn’t exist, since all bots are imbued with the subjectivity of their creators—strips anything resembling humanity out of an already impersonal interaction. “Here are your red roses, which I have ordered for you because Wikipedia informs me they are the classic anniversary gift. Love, an AI on behalf of your loving spouse,” our cards might as well soon read.

It isn’t just our interactions with other humans that could be affected. “I worry a lot about how we’re building this world that’s supposed to be for convenience, comfort, and speed, but in fact makes us feel like someone is always listening, whether they are or not,” says Ryan Calo, a professor of law at the University of Washington who has studied the impacts of anthropomorphic robots on society. He notes there’s a whole field of research into “persuasive computing,” which shows humans react to being around anthropomorphic robots the same way they react to being around other humans.

Technologies like Duplex, Calo says, are “kind of the descendants of Microsoft’s Bob and Clippy. We are finally getting it right, and with that finally getting it right, making it sound human with the pause and the ums, this does create these dangers. Because if you can take interpersonal interaction and you can scale it and exquisitely manipulate it, then the possibilities are legion.” All of which is to say, this kind of automated and realistic human impersonation raises both ethical issues of trust and philosophical questions, like what it means to have relationships if those relationships are conducted mostly by machines.

There are also practical worries. It’s not hard to imagine this kind of technology being targeted or manipulated by hackers, spammers, trolls, and other bad actors. If 2016 Twitter is prologue, the hypothetical scenario of AI phones being conscripted into a disinformation campaign is not unreasonable. “Who is responsible for the behavior of these bots?” asks Liautaud. “If something really goes wrong, is it the owner of the phone? Is it Google? Does the machine have any responsibility? The developer?”

These are the same questions that have to be asked of driverless cars and facial recognition tech. There is no answer yet. A central issue for those grappling with the ethics of AI is who has the authority to make decisions. “And,” says Liautaud, “how do we allocate the responsibility for the consequences of those decisions?”

Even when it comes to more mundane, everyday life, such features could introduce logistical challenges. If your phone is responding to emails and texts so you can focus on the more fun parts of life, how do you keep track of all the agreements your phone has made on your behalf? The integration with reminders and calendars will need to be robust and seamless, lest a phone intended to provide convenience end up producing yet another ream of data to parse and track.

That sounds annoying, and creepy. If I’m going to have to keep track of my emails and texts and appointments anyway, I’d rather not also be forced to face the uncanny valley while I do it.