# AI Initiative — Friday All-Hands Series
**4 weeks, 1 hour each, APAC all-hands**

**Presenter:** Conan Scott, Director of Services APAC

**Audience:** Mixed — Services team (primary target), presales, support, management. Technical range from "what's a prompt" to Dhruv/Saori-level builders.

---
## Series Arc

## Series Arc

The four sessions follow a deliberate arc: **Wow → How → Deep → What If**

| Week | Date | Title | Goal |
|------|------|-------|------|
| 1 | Apr 4 | **The AI Multiplier: MCP + Skills in Action** | Hook them. Show what's possible. Create desire. |
| 2 | Apr 11 | **Making AI Actually Work: CLAUDE.md and the Art of Instruction** | Practical takeaway. Everyone walks out knowing how to be better at using AI *today*. |
| 3 | Apr 18 | **Real Stories, Real Results** | Credibility. Axway people, Axway problems, AI solutions. Named stories + industry context. |
| 4 | Apr 25 | **What Could You Build? (Interactive)** | Engagement. Solicit ideas. Surface hidden use cases. Plant seeds for after paternity. |
---

## Week 1: The AI Multiplier — MCP + Skills in Action
**Friday April 4 · 1 hour**

### Intent
This is the hook. The audience should leave thinking "I want that." Not "that's interesting" — "I want that."

### Structure (60 min)

**Part 1: The Problem (10 min)**
- The copy-paste loop (repurpose the existing "Without MCP" sequence diagram — grandpa Simpson energy)
- Why people "bounce off" AI for real work — the frustration cycle
- Quick reframe: the problem isn't AI, it's the interface between AI and your systems

**Part 2: MCP — Giving AI Hands (10 min)**
- What is MCP (keep it tight — an open standard, created by Anthropic, that gives AI access to tools)
- The flip: you stop being the middleware
- Show the tool ecosystem briefly: OCP, Git, ArgoCD, databases, anything with an API

**Part 3: Skills — Giving AI Expertise (10 min)**
- The lazy-loading concept (repurpose the existing skills deck — the token cost diagram)
- Skills = SOPs for AI. Your best engineer writes it once; every agent benefits forever.
- Quick examples: weather (simple), healthcheck (structured), ST flow engineering (deep domain)
- **Key point for the room:** "People leave. Skills persist." — this lands hard with the institutional-knowledge crowd (the Bills and Gills)
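
For a slide, the "SOPs for AI" framing can be made concrete with a minimal skill sketch in the Agent Skills format (a folder containing a SKILL.md with name/description frontmatter). The healthcheck steps and names below are illustrative, not a real Axway skill:

```markdown
---
name: cluster-healthcheck
description: Run the standard cluster health check and report findings by severity
---

# Cluster Healthcheck

When asked to check cluster health:
1. List nodes and flag any not in Ready state.
2. Find pods in CrashLoopBackOff or Pending across all namespaces.
3. Compare resource requests against node capacity.
4. Summarise findings by severity, with suggested next steps.
```

The point for the room: the checklist your best engineer carries in their head becomes a file any agent can load on demand.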

**Part 4: Live Demo (25 min)**
- **Demo 1 — The Cluster Health Check (~8 min)**
  AI investigates an OpenShift cluster via MCP. Produces insight, not just data. Show it reasoning across multiple tools.
  *While the AI works:* Conan visibly does something else. The multiplier point is made without saying it.

- **Demo 2 — A Skill in Action (~8 min)**
  Trigger a skill-driven task. Show the difference between "wing it" and "follow the SOP."
  [PLACEHOLDER: Decide which skill — healthcheck is clean; ST flow engineering is impressive but niche. Healthcheck is probably better for a mixed audience.]

- **Demo 3 — The "Figure It Out" Moment (~9 min)**
  Something that requires the AI to adapt — discover tools, chain them, handle an unexpected result. The "holy shit" moment.
  [PLACEHOLDER: Decide scenario. The cross-platform Apple Notes demo from SCRATCHPAD is good — container → Mac node → discovers AppleScript. Or: "fix this broken deployment" end-to-end.]

**Part 5: Close (5 min)**
- "What you didn't see" — all the commands, iterations, and errors handled silently
- The 3x multiplier framing: parallel work + additive AI contribution
- Tease next week: "Now I'll show you how to get this yourself"

### Speaker Notes
- During demos, narrate what the AI is doing and why, but don't over-explain. Let the audience see the magic.
- If a demo takes longer than expected (AI things happen), that's actually fine — the audience is watching it think. Use the time.
- Audience question time is flexible — cut from Part 2 or 3 if the demos run long.
---

## Week 2: Making AI Actually Work — CLAUDE.md and the Art of Instruction
**Friday April 11 · 1 hour**

### Intent
Practical, actionable, "I can do this Monday morning." The audience should leave with a clear mental model of how to communicate with AI effectively — and specifically, how CLAUDE.md / system-level instructions change everything.

### Structure (60 min)

**Part 1: Why AI Disappoints People (10 min)**
- The Copilot gap — "would you like me to help you think about..." vs just doing it
- The "AI doesn't work" fallacy — a bad tool ≠ a bad category
- Root cause: people talk to AI like a search engine, or like a very literal junior employee
- Reframe: AI is a capable colleague who just started today — they need context, not commands

**Part 2: The Three Levels of AI Instruction (20 min)**
- **System level** — CLAUDE.md / system prompts. Who is the AI? What does it know about your org? What are the guardrails?
  - Show a real CLAUDE.md example (sanitised if needed)
  - The difference between "be helpful" and "you are an Axway services consultant who knows our deployment patterns, customer naming conventions, and escalation paths"
- **Project level** — Context per project/repo. What are we building? What are the constraints? What's been decided?
  - The AGENTS.md pattern: "read this before you do anything"
  - The "new colleague joins the project" analogy — what would you put in their onboarding doc?
- **User level** — Personal preferences, communication style, working patterns
  - "I prefer concise answers" vs "walk me through your reasoning"
  - The compound effect: system + project + user = AI that feels like it knows you
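
For the slide, a minimal project-level CLAUDE.md sketch shows the shape. Everything below is illustrative — the paths, naming convention, and guardrails are invented, not a real Axway project:

```markdown
# CLAUDE.md

## Who you are
You assist an Axway services consultant. Be direct. Act, don't ask.

## Project context
- Customer deployments run on OpenShift; manifests live in `deploy/`.
- Follow the naming convention `<customer>-<env>-<component>`.
- Decisions already made are recorded in `docs/decisions/` — read before proposing alternatives.

## Guardrails
- Never run destructive commands against production clusters.
- Ask before touching anything under `secrets/`.
```

A few lines like these are the difference between a generic assistant and one that already knows how this project works.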

**Part 3: Intent vs. Instruction (10 min)**
- Repurpose the existing "Prompt Engineering" slide from deck 01
- Expand with real examples:
  - Bad: "Check if my server is secure"
  - Better: "Audit SSH configuration against CIS benchmarks, prioritise findings by severity, suggest specific remediation commands"
  - Best: Write a skill for it (callback to Week 1)
- "Ambiguity is the enemy of automation. If you don't define 'safe'… the AI will."

**Part 4: Live Demo (15 min)**
- **Demo 1 — Before and After CLAUDE.md (~7 min)**
  Same prompt, two contexts. Show the dramatic difference in output quality.

- **Demo 2 — Building a CLAUDE.md Live (~8 min)**
  Pick a real Axway project or scenario. Build a CLAUDE.md with the audience. Show it immediately improve the AI's output.
  [PLACEHOLDER: Decide which project/scenario. Something the whole room recognises.]

**Part 5: Practical Takeaway + Close (5 min)**
- Checklist: 3 things you can do this week
  1. Write a CLAUDE.md for one project
  2. Add 3 sentences of personal preference to your AI tool
  3. Next time you're frustrated with AI output, ask "what context was I not giving it?"
- Tease Week 3: "Next week — real stories from people in this room"
---

## Week 3: Real Stories, Real Results
**Friday April 18 · 1 hour**

### Intent
Social proof and credibility. Named stories from Axway people using AI to solve Axway problems, mixed with broader industry examples to show the trajectory. This is the session that converts skeptics — not through tech demos, but through peer stories.

### Structure (60 min)

**Part 1: The Bedrock Trial — Early Results (10 min)**
- 5-person pilot, with one month of data by now
- Usage patterns per persona type (without naming cost figures if sensitive)
- The reactions: "I CAN NEVER GO BACK TO COPILOT" — Gill, hour one
- What surprised us

**Part 2: Real Use Cases — Our People (25 min)**
- **Dhruv's Time Tracker** (~5 min)
  Built a custom local program to fill the corporate time tracker efficiently. $2.50. One time. He previously tried with Copilot 365 — more time fighting the tool than doing the task manually. Day 1 of Claude: specified, built, done.
  *The lesson:* AI that acts vs AI that asks.

- **Saori's AI Integration** (~5 min)
  First AI project, complex integration, building it with Claude from day one. Junior hire 2 years ago → now building production AI systems.
  *The lesson:* The tool shapes the builder. Start with a good one.

- **Gill's Consulting Superpower** (~5 min)
  "It's like a consulting service." The institutional-knowledge amplifier — decades of customer context + AI reasoning = advice that used to take a team and a week.
  *The lesson:* AI doesn't replace expertise; it amplifies it.

- **The SecureTransport Story** (~5 min)
  [PLACEHOLDER: Decide how much to share. The ST flow engineering skill, the agent, the RAG journey. This is the "deep domain" example — months of tribal knowledge encoded, instantly available.]
  *The lesson:* Capture knowledge before it walks out the door.

- **Other examples from the room** (~5 min)
  Open it up: "Who else has a story?" By week 3 the pilot group will have more, and others might too.
  [NOTE: Seed this in advance — ask Dhruv, Saori, and Neo if they want to share something briefly.]

**Part 3: Beyond Axway — What's Happening Out There (10 min)**
- Industry examples of AI transformation in services/consulting orgs
- The "AI or bust" moment from the Paris AI Forum — Axway is institutionally serious now
- Competitors are moving. Standing still is falling behind.
- [PLACEHOLDER: 3-4 specific industry examples. Research closer to the date for freshness.]

**Part 4: The Uncomfortable Truth (10 min)**
- The person leveraging AI will outcompete the person who isn't
- This isn't "AI will replace you" — it's "someone using AI will outperform you"
- Copilot-grade AI creates AI skeptics. Quality matters.
- "We invested in Claude on Bedrock because we believe in giving you the best tool, not the cheapest one"

**Part 5: Close (5 min)**
- Recap the multiplier: Dhruv = 3x on a $2.50 investment. What's your version?
- Tease Week 4: "Next week — your turn. What would YOU build?"
---

## Week 4: What Could You Build? (Interactive)
**Friday April 25 · 1 hour**

### Intent
Engagement, ownership, idea generation. This is where you plant seeds that grow while you're on paternity leave. The room should leave with concrete ideas they want to try, and ideally a few people who want to champion something.

### Structure (60 min)

**Part 1: Recap + "Things You Didn't Know Were Possible" (15 min)**
- Quick series recap (3 slides max)
- Then the mind-expanders — things the audience probably hasn't considered:
  - Cross-platform automation (container → Mac → AppleScript → Apple Notes)
  - Multi-agent orchestration (a main agent delegates to specialists)
  - Scheduled AI work (cron-style: "every Monday morning, summarise last week's tickets and email me")
  - Browser automation (AI navigates web UIs on your behalf)
  - Voice/TTS (AI calls you with a briefing — yes, really)
  - Node control (AI operates remote machines)
  - Hooks (external events trigger AI responses — monitoring alert → investigation → report)
  - n8n + AI (durable workflows with AI decision points)
- Frame it: "These are all real, working today. What would you use them for?"
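
The scheduled-AI-work bullet can be shown as a one-line crontab sketch. This assumes a headless AI CLI (`claude -p`, Claude Code's print mode) and a working `mail` setup — adapt to whatever the demo environment actually provides:

```
# Every Monday at 08:00: summarise last week's tickets and email the result.
0 8 * * 1 claude -p "Summarise last week's support tickets by theme and urgency" | mail -s "Weekly ticket summary" me@example.com
```

One line of config, and the AI does recurring work without anyone prompting it.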

**Part 2: Ideation — Small Groups (20 min)**
- Break into groups of 3-4 (or, if virtual, breakout rooms)
- Prompt: "Think about your week. What's repetitive? What's tedious? What requires you to be the middleware between two systems? What knowledge is trapped in one person's head?"
- Each group picks their best idea
- [NOTE: If the room is too shy for groups, run it as a facilitated brainstorm with post-its/chat instead]

**Part 3: Share Back + Discussion (20 min)**
- Each group shares their idea (~2 min each)
- Conan and the room react — "that's buildable", "that's a skill", "that needs MCP", "that's a Tuesday afternoon project"
- Capture everything visibly (shared doc, whiteboard, whatever works)
- Identify quick wins vs bigger projects
- Ask: "Who wants to own this one?"

**Part 4: What Happens Next (5 min)**
- Conan is going on paternity leave (congratulations moment)
- The AI initiative doesn't stop — here's how to keep going:
  - Bedrock access continues
  - CLAUDE.md templates available
  - Skill creation guide available
  - Champions identified from today's session
  - [PLACEHOLDER: Decide on a Slack/Teams channel, shared doc, or other continuity mechanism]
- "When I come back, I want to hear what you built."
---

## Cross-Cutting Notes

### Tone
- Professional but not corporate. The Simpsons gifs stay.
- Serious content wrapped in human delivery. No "enterprise transformation" jargon.
- Be honest about limitations — "AI hallucinates sometimes, that's real, and here's how we handle it"
- The old guard needs respect, not condescension. Their skepticism is earned. Meet it with evidence, not enthusiasm.

### Demo Environment
[WAITING ON CONAN: Are we demoing from the Bedrock/corporate setup or personal OpenClaw? This shapes what tools/capabilities we can show.]

### Recurring Elements
- Open each session with a 1-min recap of last week (for anyone who missed it)
- Close each session with a teaser for next week (serial structure)
- "3x multiplier" is the recurring motif — reference it every week
- Keep a running "ideas board" across all 4 weeks — capture things that come up in Q&A

### Media Assets Needed
- Existing: axway.png, grandpa-gpt.gif, homerspin.gif, happy2.gif, robot.png, duckgpt.gif
- Needed: [PLACEHOLDER: Screenshots of CLAUDE.md examples, the Bedrock console, skill structure, Dhruv's tool (if he's ok with it)]

### Risks
- Demo failure during the live presentation. Mitigation: pre-recorded fallback for each demo, but try live first. Live failures are actually ok if you narrate through them — "see, it's trying something else" is part of the story.
- Low engagement in Week 4 ideation. Mitigation: seed 2-3 people in advance to kick things off. Conan's team (Dhruv, Saori) go first.
- Audience too diverse — technical people bored by basics, non-technical people lost in demos. Mitigation: narrate demos at two levels ("what it's doing" + "why that matters for your work"). The sequence diagrams help non-technical people follow.