Pre-Sales Architect · Regulated Industries · Builder

Architecting AI adoption for the
most complex enterprises.

Director of Solutions and Technical Architects at Salesforce. FSI, Healthcare. Hands-on across 200+ deal cycles, from use case discovery to production architecture.

Pre-Sales Architecture Data Cloud / Data 360 Agentforce / Agentic AI LLM Trust Layers FSI & Healthcare Data Federation (Databricks, Snowflake, BigQuery, Redshift)
$97M+
ACV directly contributed (3 yrs)
200+
Deal cycles supported
60+
Exec-level technical presentations
500+
SEs and architects enabled
CREDENTIALS MIT · Applied Agentic AI Yale · Digital Transformation Databricks · Lakehouse, GenAI, Spark
Enterprise Engagements

Complex deployments in regulated environments.

Selected engagements where I was personally involved in technical discovery, architecture design, and deployment planning.

Healthcare / Insurance
Fortune 10 healthcare company spanning clinical, pharmacy benefit, and insurance business units. Three subsidiaries, each with their own tech stack and data governance model.
10+ use cases identified

Agentic AI Across Clinical, Pharmacy, and Insurance

I led a two-day on-site discovery workshop that brought together stakeholders from three subsidiaries. We used a structured use case framework to identify and prioritize over 10 use cases: chat summarization for service reps, IVR agent orchestration connecting to legacy claims systems, member-facing agents for routine inquiries, scheduling automation, and a low-cost drug recommendation engine. After the workshop, I reviewed the current-state architecture and designed the future-state approach, mapping each use case to the right mix of Data 360, Agentforce, and RAG capabilities.
My role: Led the workshop. Built the use case prioritization framework. Reviewed and validated architecture across all three business units.
Financial Services / Banking
Top 4 U.S. bank. Multiple lines of business. Highly technical engagement with engineering and security teams.
Deep Technical Architecture

Enterprise Data Platform and Trust Architecture

This was my most technically detailed engagement. I walked their engineering teams through data ingestion patterns (APIs, SDKs, batch and streaming), data harmonization, and identity resolution across multiple systems. I spent significant time on the zero-copy architecture with Databricks: how data can be queried in place without duplication, how federation works across platforms, and how to maintain governance across the boundary. On the AI side, I went deep on the LLM trust layer: where data masking happens in the pipeline, how grounding works, how prompts are stored and executed, how unstructured data flows into the vector database, and the full LLM gateway architecture. Their security team had specific questions about audit trails, prompt injection defenses, and data lineage, all of which I addressed directly.
My role: Primary Technical Architect. Led deep-dive sessions with engineering and security teams.
Financial Services / Wealth
Fortune 20 bank, 7 lines of business, 5 separate CRM instances including private banking and wealth advisory.
10M+ annual referrals

Global Referrals Engine and Advisor 360

Two parallel workstreams at one of the largest U.S. banks. The first: a Global Referrals Engine. The bank processes roughly 10 million referrals a year across 7 lines of business, with a 30% conversion rate they wanted to improve significantly. No system unified client records across all LOBs. I designed a Data Cloud architecture to harmonize disparate client IDs into a single holistic profile, enabling AI-powered discovery against customer whitespace. The second initiative was a Wealth Advisor 360 view: unifying data from 5 separate orgs (private bank, business bank, consumer banking, small business, wealth advisory) into one client profile using identity resolution, data graphs, and Agentforce-powered summarization.
My role: Executive architect. Led discovery, designed the unified data architecture, drove phased deployment strategy. Navigated banking security and compliance requirements throughout.
Technical Patterns

Transferable to GPT deployments.

The problems I solve at Salesforce are the same ones enterprise customers will face when deploying GPT at scale.

🛡

Trust Layer and Safety Architecture

I've designed and presented LLM trust architectures to CISOs at major banks: data masking, grounding, prompt injection defenses, audit trails. These are the same conversations OpenAI SAs will have about GPT's safety guarantees with regulated enterprises.

Enterprise Data Federation

Zero-copy with Databricks and Snowflake, federation with BigQuery and Redshift, API/SDK ingestion, identity resolution, data harmonization across 5 to 7 separate orgs. Recognized internally as the Data Federation SME across all major data platforms.

🤖

Agentic Workflow Design

Architected both autonomous and assistive agents across healthcare (IVR orchestration, member-facing agents, drug recommendations) and financial services (relationship manager copilots, referral automation). From prompt design to action frameworks to production guardrails.

📐

Use Case Discovery and Prioritization

Built a repeatable framework for identifying, scoping, and prioritizing AI use cases with enterprise customers. In a two-day workshop this framework typically produces 10 to 15 qualified use cases with clear mapping from pain points to KPIs to architecture.

Multi-Level Architecture Diagram Framework

I built a framework for presenting Technical Architecture at different levels of depth depending on who is in the room. This has been adopted across Solutions Engineering and Technical Architecture at Salesforce. Portions of the content are published on architect.salesforce.com.
LEVEL 1
Executive View

Business capabilities and value streams. For C-suite and VP-level stakeholders.

LEVEL 2
Solution View

Product mapping, integration points. For solution architects and IT directors.

LEVEL 3
Technical View

Data flows, APIs, system interactions. For engineering leads and architects.

LEVEL 4
Implementation View

Object models, field mappings, config details. For developers and admins.

I standardized C4 diagrams for Agentforce and Data360, then created the methodology for the full L1 through L4 hierarchy. This framework is used for customer-facing architecture discussions across all Salesforce verticals today. Some of the resulting data models and diagram standards are publicly available at architect.salesforce.com/diagrams.
Go-to-Market at Scale

I built the workshop engine from scratch.

In 2025, I designed and led Salesforce's global one-to-many workshop motion for Agentforce and Data Cloud.

Global Workshop Motion: Agentforce and Data Cloud

48
Workshops globally
$60M+
Registered open pipe
$42M
Attended open pipe
120
New opportunities created
~1,000
Customers engaged
67%
Manager+ title attendees
4
Industry verticals
4 regions
AMER, EMEA, APAC, LATAM
Industry-specific, half-day workshops for both Data 360 and Agentforce. Account Executives nominated their customers, and we brought groups into Salesforce hubs globally. The core of each session was a use case discovery framework I built: customers worked through their own pain points, KPIs, and business goals in the room with us. We helped them define sales, service, and marketing use cases mapped to their specific priorities, then prioritized a follow-up list for custom demos, deeper technical sessions, or POCs. 67% of attendees were manager-level or above, which meant the conversations went straight to decision-makers.

What I built

The full asset kit: workshop content across 4 industry verticals, use case discovery templates, facilitator guides, HTML slide decks, printed worksheets, and follow-up templates including basho pages and Slack workflows.

How I ran it

Slack workflows for pre-workshop coordination, mandatory dry runs with recorded video walkthroughs, real-time question capture during sessions, and structured feedback routed directly to Account Executives for follow-up.

Why it worked

Everything was systematized. Nothing depended on one person's knowledge. The SEs and architects had clear playbooks, and the feedback loop ensured continuous improvement from session to session.

Builder

I build things. Not just decks.

Outside of the day job, I spend evenings and weekends building AI tools, writing about enterprise AI, and experimenting with new development workflows.

Dreamforce
2024
Data Cloud Extensibility
Snowflake Summit
2023
Snowflake + Salesforce
AWS re:Invent
2023
Salesforce + AWS
Connections
2023
BigQuery + Data Cloud Federation
Internal
2024-25
1,000+ trained across 4 verticals
TrustEval compliance assessment report

TrustEval

NEW
Python · OpenAI API · Model-agnostic · LLM-as-Judge

A compliance evaluation framework for regulated industries. Before a bank deploys an LLM, they need evidence that it handles their compliance scenarios correctly. TrustEval tests any model against realistic FSI scenarios (KYC/AML, data privacy, sanctions, insider trading, audit governance), uses GPT-4o as a consistent judge, and generates a report that a CISO or risk committee can review.

The gap it fills: Existing eval tools (LangSmith, Braintrust, Galileo) are built for ML engineers measuring prompt quality. Nobody has built the compliance evaluation center for the regulated enterprise persona. TrustEval produces reports in the language of compliance teams, not developers. It is model-agnostic, so customers can compare GPT-4o against Claude on the same test cases with a consistent judge.

The SA use case: During technical discovery, the SA works with the customer's compliance team to define test cases, configure system prompts reflecting their policies, run the eval, and iterate on prompt design until the pass rate meets their threshold. Then it becomes part of the customer's ongoing AI governance process, re-run after every model upgrade or policy change.

github.com/aulakhs/trust-eval →
VoiceInk architecture diagram

VoiceInk

macOS · Swift · NVIDIA Parakeet · Apple Silicon

A native macOS speech-to-text app that wraps NVIDIA's Parakeet model and runs locally from the menu bar. I built it because I wanted fast, private transcription without sending audio to the cloud. I use it every day.

Personal Knowledge Base README

Personal Knowledge Base

Markdown · Vendor-agnostic · Self-bootstrapping

A structured local knowledge system designed so any AI assistant can read it and understand my full context. Vendor-agnostic (works with Claude, GPT, Gemini), fully local, and continuously updated. It is the single source of truth for all my AI workflows.

github.com/aulakhs →
~/projects/trust-eval
cat CLAUDE.md
# TrustEval
## Compliance eval framework for FSI

## Architecture
- providers/ → Model-agnostic LLM interface
- eval/ → Runner, scorer, report generator
- test_cases/ → FSI compliance scenarios

## Run
python run_eval.py --provider openai

AI-Assisted Development Workflow

Structured specs · Planning mode · Symlinked repos

A methodology I developed for working with AI coding assistants: write structured specification files first, then use planning mode to reason through the implementation. Not autocomplete. A reasoning partner for building software.

Published Writing

aulakhs.github.io · Enterprise AI

I write about enterprise AI adoption, agentic systems, vendor risk, and what actually works vs. what demos well. Topics include multi-model strategies for enterprise buyers and how AI changed my daily workflow as a pre-sales leader.

aulakhs.github.io →
Why This Role. Why Now.

The playbook is being written.
I want to help write it.

"At Salesforce, I helped build the enterprise AI go-to-market from scratch. OpenAI is at that same inflection point, but for something more foundational."
01

Values alignment through experience, not just conviction. At Salesforce, trust is the number one value. In regulated industries, it's not abstract. It's PII handling, data residency, audit trails, and explaining to a CISO exactly where their data goes. I've been operationalizing trust for over a decade. What drew me to OpenAI is the mission to ensure AGI benefits all of humanity, combined with the pragmatic approach of shipping products that create real value today. That alignment matters to me because it's what my customers need.

02

I've built zero-to-one GTM motions before. OpenAI's enterprise SA motion is scaling rapidly, and that's exactly the kind of environment I do my best work in. The Agentforce and Data Cloud workshop program didn't exist when I started. I built the curriculum, the delivery framework, the engagement model, and the follow-up workflows. I scaled it to 48 workshops, $60M+ in pipeline, across 4 regions. That's the kind of building OpenAI needs right now.

03

Ready to hit the ground running. The customer persona is the same: regulated enterprise buyers, CISOs, compliance teams, engineering leads. The conversation is the same: trust, safety, data governance, architecture. The sales motion is the same: pre-sales, technical discovery, architecture design, POC, deployment. The product changes. The muscle doesn't. I've had these conversations hundreds of times at Fortune 10 companies, and the transition to OpenAI's product set is a natural extension of work I already do every day.

04

I'm already building with the product. TrustEval, VoiceInk, ChatGPT, my Personal Knowledge Base. I'm not someone who needs to be convinced to use the product or onboarded onto the API. I'm already building tools that could be used in actual customer engagements. That's a signal that I'll create leverage for the team, not just consume it.

How I'd approach it
Enterprise AI adoption in regulated industries
Discover
Map compliance landscape and regulatory constraints. Identify AI use cases tied to business pain points. Understand the decision chain: CISO, compliance, legal, engineering.
Architect
Design GPT integration within existing stack. Define trust layer: data handling, guardrails, audit trails. Architecture documentation using C4 standards tailored to each stakeholder level.
Prove value
Run compliance eval (TrustEval) against customer scenarios. Model comparison with evidence for risk committee. Execute scoped POC on highest-priority use case.
Expand
Document wins, create reusable playbooks. Identify adjacent use cases across LOBs. Establish recurring eval cadence tied to model upgrades.
Cross-functional enablers
GTM · Product · Applied AI · Partnerships