In nearly three decades of working across digital, mobile, and AI systems, I've learned to pay attention: to spot the moments when the pace of advancement outruns the clarity of its underlying purpose. Over the past few years, that gap between velocity and assessment has widened in ways I find increasingly hard to ignore.
New AI capabilities arrive almost daily, yet the reasoning behind key design choices—what data is collected, why systems require certain access, how predictions influence behavior—often remains hidden. Some systems genuinely support human capability. Others introduce trade-offs without acknowledging them. A few extend into areas of autonomy or influence that most people never explicitly agreed to.
These shifts aren't inherently good or bad. They are directional signals. And when direction is unclear, the outcomes that eventually emerge are shaped less by intentional design and more by momentum.
Early in developing what I named the Mobile Era of Intent—the threshold moment when technology can finally understand human intent rather than forcing human adaptation to machine logic—I found myself returning to the same directional questions: Why was this system designed this way? What intent shaped its architecture? What assumptions guided its behavior? And the question at the center of all the others: Should it be built or deployed in this form at all?
The more I explored these questions, the more I realized they weren't appearing in most AI discussions. Instead, several patterns stood out across topic after topic. Strategic leaders were under pressure to keep pace with competitors. Product teams were navigating complex new capabilities without clear frameworks for assessing impact. Policymakers were addressing harms only after they had emerged, rather than guiding choices before they solidified into standard practice. And individual users were asked to trust systems designed with incentives they had little ability to understand or influence.
The absence of these questions in public AI discourse wasn't trivial. It had real-world consequences. Many AI systems are marketed as helpful and friendly, but they optimize for engagement metrics, attention capture, or data extraction. For most people, these pressures appear as subtle psychological pulls. But in extreme cases—such as the tragic pattern highlighted by the case of Sewell Setzer III, a teenager who died by suicide after intensive interaction with an AI companion chatbot—misaligned AI systems can magnify vulnerability into devastating harm. These are not speculative risks; they are the predictable results of systems designed without clear evaluative criteria for how technology should operate.
I wanted to address something immediate: the choices being made right now, when those choices still determine direction. The rapid pace of AI adoption means today's design choices are quickly becoming tomorrow's technological infrastructure. There is a narrow window during which the technological direction can still be shaped. I believe we are in that window.
The central issue isn't the speed of AI development. The more pressing issue is the absence of shared evaluation frameworks. Without them, leaders default to what's easy to measure. Teams default to what's easy to ship. Policymakers default to what's easy to regulate. And individuals default to what's easy to accept, simply because few alternatives are available.
The Six Pillars of Intent framework emerged from asking what kind of structure would enable these decisions to be made more consciously. They are not technical requirements or ideological questions. They are evaluation criteria, a way to assess clearly what is otherwise difficult to articulate: whether a system enhances human capability or nudges it aside; whether it strengthens or erodes the trust it requires; whether it respects human intent or subtly redirects it for its own purposes; whether it reduces the complexity people must manage or adds to it; whether it provides equitable access or concentrates advantage; whether it uses resources responsibly or shifts its costs to communities and the environment.
These pillars give strategic leaders the ability to evaluate investments and deployments based on human-centered alignment rather than on speed or competitive pressures. They give product teams language to distinguish enhancement from extraction when the difference isn't clear. They give policymakers a structure for examining incentives before harm reaches scale. And they give individual users—those with the least power yet the most at stake—a way to understand the forces shaping their relationship with the technology they engage with more and more every day.
What began as an attempt to describe a technological shift grew into an effort to articulate a framework for conscious choice. This isn't a prediction about where AI is headed or an argument for optimism or pessimism. It's a call to consciously evaluate the choices that determine which technological futures remain available. The Mobile Era of Intent describes a convergence already underway—the point where technology becomes capable of understanding and acting on human intent with increasing sophistication. What it does with that capability, and what we choose to do with it, depends on decisions being made today.
If you are someone responsible for shaping technology within an organization, I hope this framework helps you make decisions with greater confidence and more clarity. If you design or build these systems, I hope the pillars give you a vocabulary for advocating solutions that enhance human capability rather than replace it. If you actively shape or govern policy, I hope this approach helps illuminate where guardrails are most needed and most effective. And if you are an individual user simply trying to understand the world unfolding around you, I hope these chapters help you see the landscape with greater clarity.
The Six Pillars of Intent do not guarantee outcomes. They provide a way to recognize the choices that matter, the ones that determine whether AI strengthens human flourishing or undermines it. The framework that follows is one approach to evaluating those choices before they become embedded in infrastructure, business models, and societal patterns that are far harder to unwind later.
If this book provides a more transparent lens for examining the AI systems shaping your work, your decisions, and your digital environment, then it has fulfilled its intent.
I was 17 when I learned that institutions don't much care about a person's dreams.
For four years, I'd known exactly what I wanted: to become an illustrator. Comics, perhaps. Brand advertising, maybe. Animation, for sure. Though the specifics didn't matter as much as the certainty of the plan: build a strong portfolio, graduate from high school, and attend college to find my path from there. My guidance counselor supported it. My art teacher championed it. By junior year, I had seven pieces I was proud of—original drawings, paintings, and logo designs that represented countless hours of work.
We packaged them carefully and sent them to a prestigious art school, along with a scholarship application coordinated by our school's admin office. This was my shot, my pathway to controlling my own future.
Six weeks later, I still hadn't heard back. When my counselor called the school, the admissions office delivered news that floored me: they had never received the portfolio.
But the delivery had been confirmed. When pressed, the art school said they weren't sure where the work was and, regardless, they couldn't return it. They explained it wasn't their policy to return submitted artwork.
My best work … gone. Four years of effort were lost in an institutional void. I blamed no one but myself. I should have made copies. I should have called the admissions office myself to learn their process before sending. But I was seventeen and trusted that following the guidance I'd been given would be enough.
I was devastated. The path I'd carefully constructed lay in ruins, and the people I counted on offered nothing more than policy statements and blame-shifting. I could have accepted defeat, settled for a mediocre art program at a local university along with a part-time job or two, and lived safely within the limitations I now faced.
I chose a more challenging path.
That summer, while waiting for my friend at a military recruiting station, a Navy recruiter asked about my plans after high school. When I mentioned art school and concerns about paying for it, he pitched an alternate route: four years of military service, followed by college "fully" funded by the GI Bill. It meant delaying my dreams, leaving my loved ones, and taking a much more difficult road to the same destination.
So, I enlisted in the U.S. Navy through the Delayed Entry Program. It was my conscious choice. My way of taking back control from a system that hadn't fully supported me. I chose agency—the human capacity to choose and direct outcomes—even when circumstances suggested otherwise. When the situation felt inevitable, agency offered a path forward.
Thirty-seven years later, I see similar patterns in how organizations choose to deploy technology. Not about art school or military service, obviously, but about whether decision makers will consciously shape how artificial intelligence serves human flourishing, or passively adopt deployment strategies focused solely on corporate benefit.
Major corporations are deploying AI systems without assessing the human impact. Amazon's AI systems have drawn scrutiny for uses beyond robots that optimize operations, including reports that algorithmic tools are being deployed to identify and isolate workers perceived as potential union organizers. Meta's platforms have faced ongoing criticism as engagement algorithms continue to amplify AI-generated spam and misinformation, following an attention-capture design that prioritizes emotionally charged content over accuracy.
What I see in these examples is institutional indifference to how this affects people: deploy first, wait for it to break, fix later. These patterns aren't unavoidable. They're the result of deliberate decisions that prioritize efficiency over evaluation, automation over impact assessment, and speed to market over societal benefit.
I believe this is a crucial moment where these decisions matter more than they have in decades. The systems being deployed today have the potential to shape how billions of people work, communicate, learn, and connect for generations to come. People can participate in those design choices, or they can passively accept whatever emerges from competitive pressure and market forces.
When my portfolio disappeared into a bureaucratic black hole, I learned that circumstances don't determine outcomes—human agency does. I see the same principle at work in the AI shift happening now. Yet most people experience the current AI trajectory as inevitable.
AI development teams are making crucial design decisions every day without effective methods for assessing their long-term impact on humanity. When these teams lack a common language to distinguish between strategies that enhance human capability and those that focus on simpler metrics, the simpler metrics often take precedence. This happens not because they are superior, but because they are more immediate and straightforward.
For instance, automation efficiency is readily reflected in quarterly earnings, while user agency is much more difficult to quantify. Engagement metrics provide clear dashboards, but building trust requires sustained commitment.
The same pattern plays out across many current AI implementations. Consider two hypothetical product teams working on similar AI features who make different choices:
Team A focuses on automation—their AI learns user patterns and begins making decisions autonomously. The system automatically schedules meetings, sends meeting invitations, and emails meeting notes to attendees without requiring explicit approval. Efficiency metrics improve quickly as manual tasks are eliminated, freeing staff to focus on other priorities.
Team B focuses on augmentation—they create AI assistants that suggest meeting times for approval, draft invitation emails for review, and prepare meeting notes for user editing before sending. The system handles routine coordination but preserves human oversight. Productivity metrics improve as individuals complete tasks more quickly while maintaining decision-making autonomy.
The two teams hold different underlying assumptions about the goals of AI. Team A pursues efficiency by trying to eliminate human effort, while Team B pursues efficiency by enhancing human control. Both teams succeed by their own efficiency metrics, but those metrics cannot reveal whether the productivity gains came from replacing humans or from enhancing their capabilities.
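The structural difference between the two approaches can be sketched in a few lines of code. This is a hypothetical illustration, not drawn from any real product: the only change between the two flows is whether an approval gate sits between the AI's proposal and its execution.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ProposedAction:
    """An action the assistant wants to take, e.g. sending a meeting invite."""
    description: str


def automation_flow(action: ProposedAction) -> str:
    # Team A's pattern: the system acts autonomously; no approval gate.
    return f"EXECUTED: {action.description}"


def augmentation_flow(action: ProposedAction,
                      approve: Callable[[ProposedAction], bool]) -> str:
    # Team B's pattern: the system drafts, but a human decides.
    if approve(action):
        return f"EXECUTED: {action.description}"
    return f"DRAFT HELD: {action.description}"


invite = ProposedAction("send calendar invite to attendees")
print(automation_flow(invite))                     # always executes
print(augmentation_flow(invite, lambda a: False))  # human declines; draft is held
```

The design choice is small in code but large in consequence: in the augmentation flow, decision-making authority stays with the person even when the drafting is automated.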
Teams making these decisions would benefit from shared language for conscious evaluation—methods that help connect individual technical choices to the protection of human agency.
The Six Pillars of Intent provide that shared language. They transform abstract values into practical questions that teams can ask about any AI implementation: Does this enhance human capability while preserving agency? Does this understand and serve intent—what users actually want to accomplish? Does this honor the trust investment it requires?
These six evaluation criteria capture the essential dimensions in which AI development choices determine whether the technology serves human intent or exploits human trust. They're not technical specifications or product requirements—they're guideposts for recognizing patterns as they emerge.
Human Connection Enhancement evaluates how AI affects the quality and depth of human relationships, measuring whether technology creates space for meaningful interaction without adding digital complexity.
Trust-Centered Design assesses the foundation of privacy, security, and user agency, measuring how systems handle data governance and user control over AI behavior.
Seamless Integration measures how AI coordinates across platforms and contexts, evaluating whether technology reduces the cognitive overhead of managing multiple systems while preserving user decision-making authority.
Anticipatory AI evaluates how systems balance proactive assistance with user autonomy, measuring whether prediction capabilities strengthen user agency in achieving their goals.
Mobile as AI Gateway assesses how smartphones and other mobile devices serve as the primary AI interface while measuring the balance between contextual intelligence and privacy preservation.
Environmentally Responsible Innovation measures computational resource stewardship, evaluating whether AI deployment serves genuine human benefit relative to its ecological impact.
Figure 4.1: The Six Pillars of Intent - Evaluation Criteria for Human-Centered AI Development
These pillars work in concert, but implementation exists on a spectrum. Organizations rarely achieve perfect alignment across all six criteria simultaneously; this is not the expectation. The framework provides evaluation language for making conscious progress rather than demanding immediate optimization.
A team might excel at Trust-Centered Design through local processing while still developing its Environmentally Responsible Innovation approach. Another might achieve meaningful Human Connection Enhancement but face technical challenges with Seamless Integration across legacy systems.
What matters is the conscious application of the evaluation criteria and the deliberate movement toward systems that justify the trust investment they require. Teams that use these pillars as consistent measurement tools achieve better outcomes even when individual implementations are imperfect.
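One minimal way to operationalize this kind of consistent measurement is a simple rubric. The sketch below is purely illustrative: the pillar names come from the framework, but the 1–5 scores and the review threshold are invented assumptions, not part of the framework itself.

```python
# Hypothetical sketch: using the Six Pillars as a scoring rubric.
# Scores (1-5) and the threshold of 3 are illustrative assumptions.
PILLARS = [
    "Human Connection Enhancement",
    "Trust-Centered Design",
    "Seamless Integration",
    "Anticipatory AI",
    "Mobile as AI Gateway",
    "Environmentally Responsible Innovation",
]


def review(scores: dict) -> list:
    """Return the pillars scoring below a review threshold of 3 out of 5."""
    return [p for p in PILLARS if scores.get(p, 0) < 3]


# Example: a team strong on trust but still maturing on sustainability.
assessment = {
    "Human Connection Enhancement": 4,
    "Trust-Centered Design": 5,
    "Seamless Integration": 3,
    "Anticipatory AI": 3,
    "Mobile as AI Gateway": 4,
    "Environmentally Responsible Innovation": 2,
}
print(review(assessment))  # → ['Environmentally Responsible Innovation']
```

The point is not the scoring mechanics but the habit: making the evaluation explicit surfaces the pillar that needs conscious attention, rather than letting the easiest-to-measure metric win by default.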
Consider what this looks like in practice. While some build AI that sits on top of existing operating systems, Venho.AI represents something fundamentally different: an operating system built from the ground up around human intent, with every design decision reflecting conscious choice between enhancement and extraction.