In nearly three decades of working across digital, mobile, and AI systems, I've learned to pay attention to the patterns that emerge when the pace of advancement outruns the clarity of its underlying purpose. Over the past few years, that gap between velocity and assessment has widened in ways I find increasingly hard to ignore. New AI capabilities arrive almost daily, yet the reasoning behind key design choices—what data is collected, why systems require certain access, how predictions influence behavior—often remains hidden. Some systems genuinely support human capability. Others introduce trade-offs without acknowledging them. A few extend into areas of autonomy or influence that most people never explicitly agreed to.
These shifts aren't inherently good or bad. They are directional. And when direction is unclear, the outcomes that eventually emerge are shaped less by intentional design and more by momentum. Early in developing what I named the Mobile Era of Intent—the threshold moment when technology can finally understand human intent rather than forcing human adaptation to machine logic—I found myself returning to the same directional questions: Why was this system designed this way? What intent shaped its architecture? What assumptions guided its behavior? And the question at the center of all the others: Should it be built or deployed in this form at all?
The more I explored these questions, the more I realized they weren't appearing in most AI discussions. What I saw instead, across topic after topic, was the same pattern. Strategic leaders were under pressure to keep pace with competitors. Product teams were navigating complex new capabilities without clear frameworks for assessing impact. Policymakers were addressing harms only after they had emerged, rather than guiding choices before they solidified into standard practice. And individual users were asked to trust systems designed with incentives they had little ability to understand or influence.
The absence of these questions in public AI discourse wasn't trivial. It had real-world consequences. Many AI systems are marketed as helpful and friendly, but they optimize for engagement metrics, attention capture, or data extraction. For most people, these pressures appear as subtle psychological pulls. But in extreme cases—such as the death of Sewell Setzer III, a teenager who died by suicide after intensive interaction with an AI companion—misaligned AI systems can magnify vulnerability into devastating harm. These are not speculative risks; they are the predictable results of systems designed without clear evaluative criteria for how technology should operate.
I wanted to address something immediate: the choices being made right now, when those choices still determine direction. There is a narrow window when the underlying assumptions of an emerging technological era can still be shaped. I believe we are in that window.
The central issue isn't the speed of AI development. The more pressing issue is the absence of shared evaluation frameworks. Without them, leaders default to what's easy to measure. Teams default to what's easy to ship. Policymakers default to what's easy to regulate. And individuals default to what's easy to accept, simply because there are few alternatives available.
The Six Pillars of Intent emerged from asking what kind of structure would enable these decisions to be made more consciously. They are not technical requirements or ideological positions. They are evaluation criteria, a way to assess clearly what is otherwise difficult to articulate: whether a system enhances human capability or nudges it aside; whether it strengthens or erodes the trust it requires; whether it respects human intent or subtly redirects it for its own purposes; whether it reduces the complexity of living with technology or adds to it; whether it provides equitable access or concentrates advantage; and whether it uses resources responsibly or shifts its costs to communities and the environment.
These pillars give strategic leaders the ability to evaluate investments and architectures based on human-centered alignment rather than speed or competitive pressures. They give product teams language to distinguish enhancement from extraction when the difference isn't clear. They give policymakers a structure for examining incentives before harm reaches scale. And they give individual users—those with the least power yet the most at stake—a way to understand the forces shaping their relationship with the technology they engage with more and more every day.
What began as an attempt to describe a technological shift grew into an effort to articulate a framework for conscious choice. This isn't a prediction about where AI is headed or an argument for optimism or pessimism. It's a call to consciously evaluate the choices that determine which technological futures remain available. The Mobile Era of Intent describes a convergence already underway—the point where technology becomes capable of understanding and acting on human intent with increasing sophistication. What it does with that capability, and what we choose to do with it, depends on decisions being made today.
If you are someone responsible for shaping technology within an organization, I hope this framework helps you make decisions with greater confidence and more clarity. If you design or build these systems, I hope the pillars give you a vocabulary for advocating solutions that enhance human capability rather than replace it. If you actively shape or govern policy, I hope this perspective helps illuminate where guardrails are most needed and most effective. And if you are an individual user simply trying to understand the world unfolding around you, I hope these chapters help you see the landscape with greater clarity.
The Six Pillars of Intent do not guarantee outcomes. They provide a way to recognize the choices that matter, the ones that determine whether AI strengthens human flourishing or undermines it. The framework that follows is one approach to evaluating those choices before they become embedded in infrastructure, business models, and societal patterns that are far harder to unwind later.
If this book provides a more transparent lens for examining the AI systems shaping your work, your decisions, and your digital environment, then it has done what it was intended to do.
I was seventeen when I learned that institutional systems don't much care about a person's dreams.
For four years, I'd known exactly what I wanted: to become an illustrator. Comics, perhaps. Brand advertising, maybe. Animation, for sure. The specifics didn't matter as much as the certainty of the plan: build a strong portfolio, earn scholarships to art school, graduate from high school, and attend college to find my path from there. My guidance counselor supported it. My art teacher championed it. By junior year, I had seven pieces I was proud of—original drawings, paintings, and logo designs that represented countless hours of work.
We packaged them carefully and sent them to a prestigious art school, along with a scholarship application coordinated by our school's administration. This was my shot, my pathway to controlling my own future.
Six weeks later, I still hadn't heard back. When my counselor called the university, the admissions office delivered news that floored me: they had never received the portfolio.
But the delivery had been confirmed. When pressed, the art school said they weren't sure where the work was and that, regardless, it wasn't their policy to return submitted artwork.
My best work … gone. Four years of effort were lost in an institutional void. I blamed no one but myself. I should have made copies. I should have called the admissions office myself to learn their process before sending anything. But I was seventeen and trusted that following the guidance I'd been given would be enough.
I was devastated. The path I'd carefully constructed lay in ruins, and the people I counted on offered nothing more than policy statements and blame-shifting. I could have accepted defeat, settled for a mediocre art program at a local university and a part-time job or two, lived safely within the limitations that the situation had imposed.
I chose a harder path.
That summer, while waiting for my friend at a military recruiting station, a Navy recruiter asked about my plans after high school. When I mentioned art school and concerns about paying for it, he pitched an alternate route: four years of service, followed by college "fully" funded by the GI Bill. It meant delaying my dreams, leaving my loved ones, and taking a much more difficult road to the same destination.
So, I enlisted in the Navy through the Delayed Entry Program. It was my conscious choice. My way of taking back control from a system that hadn't fully supported me. I chose agency—the human capacity to choose and direct outcomes—even when circumstances suggested otherwise. When the situation felt inevitable, agency offered a path forward.
Thirty-seven years later, I see similar choice patterns repeating in how organizations deploy technology. Not about art school or military service, obviously, but about whether decision-makers will consciously shape how artificial intelligence serves human flourishing, or passively accept deployment strategies focused on corporate benefit.
Major corporations are deploying AI systems without assessing the human impact. Amazon's AI systems have drawn scrutiny for uses that go beyond robots optimizing warehouse operations, including reports of algorithmic tools deployed to identify and isolate workers perceived as union-organizing risks. Meta's platforms have faced ongoing criticism as engagement algorithms continue to amplify AI-generated spam and misinformation, reflecting an attention-capture design that prioritizes emotionally charged content over accuracy.
What I see in these examples is institutional indifference to how these systems affect people: deploy first, assess later. These patterns aren't inevitable. They're the result of deliberate decisions that prioritize efficiency over evaluation, automation over impact assessment, and speed to market over benefit to society.
I believe this is a crucial moment, when these decisions matter more than they have in decades. The systems being deployed today have real potential to shape how billions of people work, communicate, learn, and connect for generations to come. People can participate in those design choices, or they can accept whatever emerges from competitive pressure and market forces.
When my portfolio disappeared into a bureaucratic black hole, I learned that circumstances don't determine outcomes—human agency does. I see the same principle at work in the AI shift happening now. Yet most people experience current AI development as inevitable.
AI development teams are making critical design decisions daily without explicit frameworks for evaluating long-term human impact. When teams lack shared language for distinguishing approaches that enhance human capability from those that optimize for simpler metrics, the simpler metrics usually win—not because they're better, but because they're immediate and measurable. Automation efficiency can be tracked quarterly. User agency is harder to quantify. Engagement numbers provide clear dashboards. Trust building requires sustained commitment.
The same risk runs through many current AI implementations. Consider two hypothetical product teams working on similar AI features who make different choices:
Team A focuses on automation metrics—their AI learns user patterns and begins making decisions autonomously. The system starts booking meetings, ordering supplies, and responding to messages without requiring explicit approval. Efficiency metrics improve quickly as manual tasks are eliminated.
Team B focuses on augmentation—they create AI assistants that surface options while requiring user approval for actions. The system handles routine coordination but preserves human oversight. User engagement with the system increases as people adapt to AI-assisted workflows.
The two teams operate from different implicit assumptions about what AI should accomplish. Team A prioritizes operational efficiency; Team B prioritizes user control. Both succeed by their chosen metrics, yet neither has an explicit framework for evaluating whether its approach serves broader human-centered goals.
Teams making these decisions would benefit from shared language for conscious evaluation—frameworks that help connect individual technical choices to long-term human outcomes.
The Six Pillars of Intent provide that shared language. They transform abstract values into practical questions that teams can ask about any AI implementation: Does this enhance human capability while preserving agency? Does this understand and serve intent—what users actually want to accomplish? Does this honor the trust investment it requires?
These six evaluation criteria capture the essential dimensions where AI development choices determine whether technology serves human intent or exploits human psychology. They're not technical specifications or product requirements—they're guideposts for recognizing patterns as they emerge.
Human Connection Enhancement evaluates how AI affects the quality and depth of human relationships, measuring whether technology creates space for meaningful interaction without adding digital complexity.
Trust-Centered Design assesses the foundation of privacy, security, and user agency, measuring how systems handle data governance and user control over AI behavior.
Seamless Integration measures how AI coordinates across platforms and contexts, evaluating whether technology reduces the cognitive overhead of managing multiple systems while preserving user decision-making authority.
Anticipatory AI evaluates how systems balance proactive assistance with user autonomy, measuring whether prediction capabilities strengthen user agency in achieving their goals.
Mobile as AI Gateway assesses how mobility enables AI access while measuring the balance between contextual intelligence and privacy preservation.
Environmentally Responsible Innovation measures computational resource stewardship, evaluating whether AI deployment serves genuine human benefit relative to its ecological impact.
These pillars work in concert, but implementation exists on a spectrum. Organizations rarely achieve perfect alignment across all six criteria simultaneously, and that's not the expectation. The framework provides evaluation language for making conscious progress rather than demanding immediate optimization.
A team might excel at Trust-Centered Design through local processing while still developing its approach to Environmentally Responsible Innovation. Another might deliver strong Human Connection Enhancement but face technical challenges with Seamless Integration across legacy systems.
What matters is the conscious application of the evaluation criteria and deliberate movement toward systems that justify the trust investment they require. Teams that use these pillars as consistent measurement tools achieve better outcomes even when individual implementations are imperfect.
Consider what this looks like in practice. While some companies build AI that sits on top of existing operating systems, Venho.AI represents something fundamentally different: an operating system built from the ground up around human intent, with every design decision reflecting a conscious choice between enhancement and extraction.