Compare commits

...

11 Commits

Author SHA1 Message Date
Clawdbot 179115632b Paris AI Forum ideas 2026-02-05 04:22:32 +00:00
7eb9a0a790 Add 3x multiplier talking point 2026-02-01 11:45:28 +00:00
dcab388256 Fix: use plain text content 2026-02-01 11:35:44 +00:00
6b8432eed8 Fix scratchpad content (was double-encoded) 2026-02-01 11:28:07 +00:00
e0caa2e32d Add presentation ideas scratchpad 2026-02-01 10:41:22 +00:00
a62361ffdc pdf 2026-01-20 14:17:32 +11:00
e035ef36f6 removed node stuff 2026-01-19 22:03:12 +11:00
c360dc4fd3 rm .DS_Store 2026-01-19 22:02:23 +11:00
9cd2b0a8b2 deleted node_modules 2026-01-19 20:49:54 +11:00
e02eb96001 new push 2026-01-19 20:37:55 +11:00
a4d9138f01 updated MCP mermaid diagram to show confirmation
added prompt engineering slide

TO-DO. start on demo slides and notes
2026-01-19 18:56:19 +11:00
10 changed files with 179 additions and 24 deletions

Binary file not shown (before: 960 KiB).

@@ -1,6 +0,0 @@
{
"name": "01-claude-ocp",
"lockfileVersion": 3,
"requires": true,
"packages": {}
}


@@ -1,6 +0,0 @@
{
"name": "01-claude-ocp",
"lockfileVersion": 3,
"requires": true,
"packages": {}
}


@@ -1 +0,0 @@
{}


@@ -42,7 +42,7 @@ What is MCP?
<!-- new_line -->
<!-- alignment: center -->
<!-- font_size: 2 -->
**MCP gives tools to AI**
MCP gives tools to AI
===
![image](robot.png)
<!-- no_footer -->
@@ -123,9 +123,11 @@ sequenceDiagram
LLM->>ArgoCD: Get GitOps application
LLM->>Git: Get Git repo
LLM->>LLM: Plan fix
LLM->>You: Here is the plan. Requesting Approval
You->>LLM: Approved
LLM->>Git: Push fix to Git
LLM->>OCP: Verify change applied
LLM->>You: The issue was X. I resolved by doing Y
LLM->>You: Applied Successfully
```
<!-- pause -->
<!-- column: 1 -->
@@ -135,6 +137,9 @@ sequenceDiagram
<!-- new_line -->
<!-- new_line -->
<!-- new_line -->
<!-- new_line -->
<!-- new_line -->
<!-- new_line -->
![image:width:100%](homerspin.gif)
@@ -176,14 +181,32 @@ Tools for the demo
<!-- new_line -->
* `argocd-mcp-server` - ArgoCD (aka OpenShift GitOps) application management
<!-- new_line -->
* `minio-mcp-server` - S3-Compatible Object storage operations
<!-- no_footer -->
<!-- speaker_note: Explain the tools in the wider context -->
<!-- speaker_note: openshift - complete CRUD access -->
<!-- speaker_note: gitea - ability to read and commit, create repos etc. -->
<!-- speaker_note: ArgoCD - add and keep deployments in a specified state -->
<!-- speaker_note: MinIO - self-hosted S3 -->
<!-- end_slide -->
<!-- font_size: 4 -->
Prompt Engineering
===
<!-- font_size: 2 -->
Intent vs. Instruction
===
<!-- font_size: 2 -->
![image:width:100%](table.png)
<!-- pause -->
<!-- new_line -->
<!-- new_line -->
<!-- font_size: 2 -->
Ambiguity is the enemy of automation. If you don't define 'safe'… the AI will.
<!-- end_slide -->

Binary file not shown.

01-claude-ocp/table.png (new binary file, 167 KiB, not shown)

@@ -1,7 +0,0 @@
![image:width:50%](happy.gif)
```bash +exec_replace
mpv --no-config --vo=kitty --profile=sw-fast --really-quiet happy.webm
```


@@ -0,0 +1,38 @@
# Paris AI Forum: Hyper-Agentic Demo Ideas
**Goal:** Demonstrate the difference between "Chatting with AI" (Hope) and "Agentic Workflows" (SOP/Action).
**Constraints:** Must be impossible/hard for a standard web LLM. Must show tools, file manipulation, or system state changes.
---
## 1. The "Instant Briefing" (Research Loop)
**Concept:** Turn a vague question into a tangible artifact in < 2 minutes.
**The Prompt:** "I have a meeting with the CTO about [Emerging Tech Topic] in 5 minutes. Build me a briefing pack."
**The Agent Run:**
1. **Search:** Scours the web for the latest 3-5 credible sources (filtering spam).
2. **Synthesize:** Writes a structured `EXECUTIVE_BRIEF.md` (Key Trends, Risks, Opportunities).
3. **Generate:** Uses `presenterm` (or `pandoc`) to auto-generate a PDF/Slide deck from that markdown.
4. **Deliver:** Pushes the PDF to a shared repo or Slack channel live.
**The "Wow":** Moving from "text output" to "file delivery" without human copy-pasting.
## 2. The "Infrastructure Surgeon" (Self-Healing)
**Concept:** AI that can touch the metal.
**The Setup:** A broken service on a demo OpenShift namespace (e.g., a pod crash loop or wrong env var).
**The Prompt:** "The `demo-app` is down. Fix it."
**The Agent Run:**
1. **Diagnose:** Runs `kubectl get pods`, `kubectl logs`, `curl localhost`. Identifies the error (e.g., "Port 8080 refused").
2. **Reason:** "Config map points to 8081, app listens on 8080."
3. **Act:** Runs `kubectl patch` or edits the deployment manifest.
4. **Verify:** Runs `curl` again to confirm 200 OK.
**The "Wow":** Breaking the "Read-Only" barrier. The AI *changed state* to solve a problem.
## 3. The "Meeting Artifact Generator" (System Integration)
**Concept:** Turning unstructured voice/text into structured systems of record.
**The Setup:** A raw transcript or audio file of a chaotic meeting.
**The Prompt:** "Process this meeting dump. Update our systems."
**The Agent Run:**
1. **Parse:** Extracts Action Items, Decisions, and Deadlines.
2. **Ticket:** Creates actual Issues in Gitea (or Mock Jira) for each action item.
3. **Schedule:** Generates an `.ics` calendar invite for the follow-up.
4. **Notify:** Sends a summary message to a Telegram/Slack group tagging the owners.
**The "Wow":** The AI interacting with *multiple* disparate APIs (Git, Calendar, Chat) to orchestrate a workflow, not just summarize text.

SCRATCHPAD.md (new file, 114 lines added)

@@ -0,0 +1,114 @@
# Presentation Ideas Scratchpad
Raw ideas and scenarios for future AI presentations. Capture now, refine later.
---
## Presentation #1: MCP in Practice (OpenShift/GitOps)
**Audience:** Internal Axway team (small, reports to Conan)
**Tone:** Serious content, Simpsons humor, relatable frustration → payoff arc
### Core Thesis
The value of MCP isn't "AI can run kubectl" — it's **flipping the discovery loop**.
- **Without MCP:** Human works *for* the AI (copy-paste errors, run commands, provide context, be the middleware)
- **With MCP:** AI works *for and with* the human
### Narrative Arc
1. **What is MCP?** — Open standard, Anthropic 2024, gives tools to AI
2. **Why MCP? (Without)** — Grandpa Simpson walks in, sees GPT, walks out
- *Pitch:* Complex problems without tool-bearing AI = frustrating, time-consuming, upside-down. People "bounce off" AI for real work.
3. **Why MCP? (With)** — Setup for the payoff
4. **Tools for the demo** — openshift-mcp, gitea-mcp, argocd-mcp
5. **Prompt Engineering** — "Intent vs. Instruction" / "Ambiguity is the enemy of automation. If you don't define 'safe'… the AI will."
6. **DEMO** — Three scenarios, progressively impressive:
- **Scenario 1:** "Run a healthcheck of my cluster"
- AI examines multiple resources
- Key: produces **INSIGHT**, not just dry information
- **Scenario 2:** "Is my NAS slower than it should be?"
- AI launches ephemeral test pod (permitted via system prompt)
- Reasons against hardware expectations (2-bay 5400RPM, 1Gb network); see the test-pod sketch after this list
- **Scenario 3:** "Lock down security in this namespace"
- AI investigates current posture
- Plans path to restricted-v2
7. **Summary** *(to be drafted)*
- Land the loop-flip insight
- "What you didn't see" — all the commands, errors, iterations handled silently
8. **Closer** — Homer spinning in chair (callback)
- "Taking us out of the discovery loop leaves us free to be productive elsewhere"
### Demo Technique
During demo: **actually do other work** while AI runs. Makes the "frees you up" point concrete — audience watches AI work, watches presenter work, realizes neither needed the other.
### Key Moments
- "Holy shit, it just figured it out" — if presenter had these moments building the demo, audience will too
- Transition from "tech demo" to "I want this"
---
## Presentation #2: Ideas (TBD)
**Context:** Axway heritage — Sopra Group → Sopra Steria. Enterprise software, API management, governance. Audience cares about security, compliance, "don't let the AI rm -rf."
### Option A: Security / AI Gateway
- Take the security line further than system prompts
- Show how an **AI Gateway** solves remaining concerns:
- Rate limiting
- Action allowlists
- Audit trails
- Policy enforcement
- Narrative: "Your enthusiastic AI colleague needs guardrails in enterprise"
### Option B: Hyper-Agentic First, Then Governance
- Go full agentic — show the power (browser control, cross-platform nodes, compound tool use)
- *Then* pivot: "And this is how you keep your enthusiastic colleague from doing something you didn't want"
- AI Gateway as the answer to the "wait, but what if..." concerns the demo just raised
### Potential Beats
- Show the Apple Notes/Reminders cross-platform demo (container → Mac node → AppleScript)
- Browser automation
- Multi-step reasoning with real consequences
- Then: governance layer, audit, policy, enterprise controls
---
## Loose Ideas (Capture as they come)
### Cross-Platform Automation Demo (Apple Notes/Reminders)
**Already documented above — usable for Presentation #2 "hyper-agentic" section**
- AI in secure container reaches out to Mac node
- Original tool (memo CLI) too interactive
- AI adapts: discovers AppleScript as native alternative
- Creates notes/reminders via osascript
- "Figures it out on its own" moment
### Meta-Demo: The Session Itself
The conversation that led to this scratchpad is itself demo-worthy:
- Casual experiment → discovery
- Presenter mentions "might use this for a demo"
- AI offers to capture it (not requested)
- Writes structured doc to the right repo
- That's working *with* someone, not commanding a tool
---
*Add ideas below as they emerge*
### The 3x Multiplier (Key Talking Point)
Common framing: "AI will replace all our jobs" — scary, paralyzing, defensive.
Better framing: **"The person leveraging AI will outcompete the person who isn't."**
**Example from practice:**
- Conan wanted to do a manual task (building muscle memory)
- Asked AI: "While I'm doing this, write an MD with X, Y, Z bullet points and anything else you see fit"
- Result: well-structured doc + *additional valuable points the AI added on its own*
**The math:**
- 2x = doing two things at once (parallel work)
- 3x = parallel work + AI's *additive* contribution (compounded output)
This is the multiplier people miss. You're not just parallelizing — you're getting value you didn't explicitly ask for.