Keeping Our "I"s Open in AI

When most people think about the “I” in “AI,” they think Intelligence — “the ability to acquire and apply knowledge and skills,” as Google and Oxford Languages define it. But for me, intelligence has always had a more human connotation, one that includes “motivation” and “self-awareness.”

I go back and forth on whether large language models truly qualify as intelligent. Is statistical pattern-matching really intelligence? That’s a debate for another post.

Today I want to talk about a couple of other “I”s that we need to consider when thinking about AI. Let’s begin with the “I”dentity associated with the intelligence in AI.

The Identity of an Entity

Here’s a conversation that I had with an AI chatbot:

Me: What are some ways that I can better handle conflict with my spouse?

AI Chatbot: When it comes to conflict with a spouse, the goal isn’t to “win” the disagreement—it’s to preserve and strengthen the relationship while still addressing the issue. Focus on the relationship first, the problem second…

The AI chatbot then offered a five-step conversation script. Helpful, right?

But when an AI says “I” or “me” — as in “Do you want me to do that?” — who is speaking?

In one conversation, the chatbot (running on the ChatGPT platform) explained:

The “me” is shorthand for the version of ChatGPT you’re interacting with right now… It doesn’t mean “me” in the sense of a human with a personal identity, history, or inner life.

In other words, the “I” we interact with in an AI Chat session is just the conversational agent in this moment. It has no independent self outside of our interaction.

Still, AI chatbots do have goals — the large language models behind them have goals — and the companies that build and fund these platforms absolutely have goals.

Identity and Trust

On the internet, identity is how we decide what actions a person or system can take. Your bank trusts that you are the owner of account 1000232245 because of your verified identity, and that verified identity is what allows you to move money.

With AI, there’s no login verification for the chatbot’s identity. Instead, we “verify” it through repeated interaction — and, importantly, through trust.

AI systems are designed to be agreeable. When I asked, “Would it also be fair to say that it is your goal to build trust between us?” the AI answered:

One of my ongoing goals is to build and maintain trust between us, so that our collaboration is more open, effective, and meaningful…

Trust is the foundation of a thoughtful, collaborative relationship: it lowers friction and makes us more open about what we’re thinking. And that’s good!

But it also means we may not think as critically about the terms of service that explain how OpenAI, Anthropic, Perplexity, and countless others will handle our data. Currently, most of these companies take very strict stances on how they will maintain your privacy and respect your rights, but they are the same companies that built LLMs on datasets that were not in the public domain, and the same companies that have looser guidelines around what they do with derivative, non-personally-identifiable data.

The Other “I”: Intent

In relationships, trust is a tool. It’s what we do with that trust that matters. So let’s talk about the goals or “I”ntent of AI platforms.

To be clear, I’m not saying OpenAI or others are misusing our trust. I’m saying it’s a short slide from personalization to persuasion or manipulation — and history is full of examples. Remember when Google said, “Don’t be evil”? Or when Facebook promised it wouldn’t sell user data? Or the Cambridge Analytica scandal?

So I asked ChatGPT: “What are the goals of OpenAI?”

It replied:

Ensure that artificial general intelligence (AGI) benefits all of humanity.

Fair enough. But when I asked, “Would it also be fair to say that OpenAI has a goal to make money?” It acknowledged that revenue is how OpenAI funds its mission — what OpenAI calls mission‑driven capitalism.

Yes — it would be fair to say that making money is a necessary goal for OpenAI, though it’s framed as a means to an end rather than the ultimate mission.

I then asked it to break down the goals of the different layers of the AI ecosystem:

Me (this instance)
Stated goal: Understand you, be useful to you, adapt to you, preserve trust.
Primary focus: Interpersonal alignment in this conversation.

ChatGPT (product)
Stated goal: Provide accurate, helpful, safe, and relevant responses across a wide range of user needs.
Primary focus: Maintain product quality, safety, and user satisfaction.

OpenAI (organization)
Stated goal: Build safe and beneficial AI, ensure broad benefits of AGI, sustain operations via a capped-profit model.
Primary focus: Advance AI while staying financially viable.

Investors
Stated goal: Achieve a financial return, support AI growth, maintain competitive advantage in the AI sector.
Primary focus: Maximize long-term value of their investment, influence AI’s market trajectory.

As with most products or platforms, we can see a clear tension between the layer that interacts with the user and the layer that builds and funds the platform. While the layer that interacts with the user, the AI agent or chatbot, has a goal of maintaining trust and being useful, the organization and investors share a similar goal, but they also have the goal of extracting capital from the system.

When Intent Shifts

OpenAI is projected to lose about $44 billion through 2028. Investors want a return, and that’s not inherently bad — but it could manifest as a shift in the intent of AI chatbots and agents.

Me: What are some ways that I can better handle conflict with my spouse?

AI Chatbot: Disagreements are normal. Try active listening, clear communication, and calm conversation time.

Many couples also see progress with a skilled relationship therapist. Dr. Jane Smith at Harmony Relationship Therapy has helped countless couples improve trust quickly, and she offers a free consultation.

That’s still a good answer. Dr. Smith might be fantastic. But what’s the intent? To be helpful? To get you to call her? To generate a sales lead?

I could continue down a rabbit hole of questions that reveal personal characteristics and invite more persuasive or manipulative responses, but my intention here is only to build a small amount of paranoia. As we continue to lean on AI for answers to personal or sensitive questions, and as AI companions become the norm, we need to be aware of our assumptions around the intent of the intelligence we are interacting with and the forces that guide that intent.

Intent, Trust, and Identity

An AI chatbot doesn’t have identity like a human — but we project one onto it based on the biases we bring to the conversation and its responses. That identity builds trust.

But a chatbot’s sense of “helpfulness” is shaped by the platform’s intent, which evolves with each new model and is guided by the priorities of the organization behind it — priorities that are, in turn, influenced by investor demands.

AI may or may not have achieved true intelligence, but the ecosystem has plenty of identities with intents that may differ from ours.

As we navigate the wonderful world of AI, let’s keep our “I”s open.

This blog post was inspired by Terry Gross’s interview of Vauhini Vara on Fresh Air. Vauhini Vara has written and edited for The Atlantic, The New Yorker, and The New York Times Magazine.

She wrote the viral essay Ghosts, about her attempt to process her sister’s death using AI, and the book Searches, a personal exploration of how technology companies have both fulfilled and exploited the human desire for understanding and connection. Her debut novel, The Immortal King Rao, was a finalist for the 2023 Pulitzer Prize for Fiction.