Hands-on Lab
Follow along step by step. Each task builds on the previous one, progressing from simple queries to autonomous, multi-step agentic workflows. No prior AI experience required.
Before You Begin
Before we start, let's make sure your environment is ready. Check off each item below. If anything is missing, visit the Setup Guide first.
- VS Code is installed and open
- Node.js (v18+) is installed: run `node --version` to verify
- Gemini CLI is installed: run `gemini --version` to verify
- You have a Google account signed in (Gemini CLI will prompt you on first run)
- You created a practice folder to work in (we will use `~/ai-workshop`)
Open the integrated terminal with Ctrl + ` (backtick) on Windows/Linux, or Cmd + ` on Mac. The terminal panel opens at the bottom of the editor.

Create Your Practice Folder
Open your VS Code terminal and run these commands to create and navigate to your workshop folder:
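The exact commands from the workshop materials are not reproduced here; a minimal version, assuming the `~/ai-workshop` path from the checklist, is:

```shell
# Create the practice folder (no error if it already exists)
mkdir -p ~/ai-workshop

# Move into it so all AI-generated files land here
cd ~/ai-workshop
```

Launching Gemini CLI from inside this folder matters: the agent reads and writes files relative to the directory it was started in.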
Launch Gemini CLI
Type gemini and press Enter. The first time, it will ask you to sign in with your Google account.
```
Welcome to Gemini CLI!
You are now chatting with Gemini.
Type your message and press Enter to send. Press Ctrl+C to exit.

>
```
Explain & Verify
The first rule of the Agentic Paradigm: Trust, but Verify. Before letting an AI write or modify anything, we use it to explain existing work. This builds your instinct for what good AI output looks like, and it is the safest way to start.
Ask Gemini to Explain Something
Let's start with the simplest possible interaction. Type this exact prompt at the > cursor. This is a "knowledge query," meaning you are asking the AI to retrieve and synthesize information.
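A prompt along these lines works well (the original workshop wording may differ slightly):

```
Explain what "agentic AI" means in simple terms. Give 3 examples that
contrast a regular AI request with an agentic one.
```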
Gemini should produce a structured response with:
1. A clear definition of agentic AI
2. Three comparison examples (e.g., "Regular: 'Write me an email' vs Agentic: 'Draft a follow-up email based on my last 3 conversations with this contact'")
3. An emphasis on autonomy, tool use, and chained reasoning
Read the output carefully. Does the explanation make sense? Is anything missing or inaccurate? This is the core skill: evaluating AI output with a critical eye.
Audit Real Code
Now let's make it practical. We will ask Gemini to analyze a real HTML file. First, create a small test file for the AI to examine:
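A creation prompt along these lines is sufficient (the exact original prompt is not shown; adjust as you like):

```
Create a file called test-page.html containing a simple page with a
heading, a short paragraph, and a button. Keep it under 25 lines.
```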
Watch as Gemini creates test-page.html in your ai-workshop folder. This is the difference between chatting and doing.

Now ask Gemini to audit its own work:
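For example (illustrative wording):

```
Review test-page.html for accessibility and best-practice problems.
Rate each issue low, medium, or high, and suggest a fix for each.
Do not change the file yet.
```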
Gemini should return a structured review like:
```
Issue 1 (Medium): Missing viewport meta tag
- The page won't render correctly on mobile devices.
- Fix: Add <meta name="viewport" content="width=device-width, initial-scale=1.0">

Issue 2 (Low): Button lacks aria-label
- Screen readers won't know what the button does.
...
```
This is a "Human-in-the-Loop" audit: the AI identifies issues, you decide what to fix.
Verify and Correct
Now verify the AI's claims yourself. Open test-page.html in your browser (right-click the file in VS Code and select "Open with Live Server" if you have that extension installed, or double-click the file in your system file explorer). Then ask Gemini to fix the issues it found:
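A fix prompt along these lines works (illustrative wording):

```
Fix the issues you found in test-page.html. Keep everything else
unchanged, and tell me exactly what you modified.
```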
🔧 Troubleshooting Lab A
"Gemini did not create the file" — Make sure you are in the correct directory (ai-workshop). Gemini creates files relative to where you launched it.
"The code review seems incomplete" — Try being more specific: add "Check WCAG 2.1 AA compliance" to your review prompt for deeper analysis.
"My file looks different from the example" — That is expected! AI output varies between sessions. Focus on the structure (did it find real issues?), not exact wording.
Automate & Dry Run
Now we level up. Automation does not mean giving up control. It means delegating the boring parts so you can focus on decisions. The key technique is the Dry Run: ask the AI to propose changes before you approve them.
Create Multiple Test Files
To practice automation, we need a few files to work with. Ask Gemini to scaffold a small project:
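A scaffolding prompt along these lines produces the four files used in the rest of this lab (the file names below are assumptions based on the expected output further down):

```
Create a small portfolio project in this folder: index.html, about.html,
and contact.html, plus a shared style.css with a simple dark theme.
Link the stylesheet from every page and add basic placeholder content.
```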
Wait for Gemini to finish. Check your file explorer. You should now see 4 new files in your folder.
The Dry Run: Propose Without Changing
Here is the critical technique. We will ask Gemini to analyze all the HTML files and suggest improvements, but we will explicitly tell it not to modify anything yet:
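For example (illustrative wording; the capitalized constraint is the important part):

```
Review every HTML file in this folder for SEO basics: meta description,
Open Graph title, and any other issues. Present your findings as a
markdown table. DO NOT modify any files yet — just report.
```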
Gemini should output a review table like:

| File | Meta Description | OG Title | Issues Found |
|--------------|-------------------------------|----------------|--------------------------|
| index.html | "Personal portfolio of..." | "My Portfolio" | Missing lang |
| about.html | "Learn about my background.." | "About Me" | No alt on img |
| contact.html | "Get in touch via this..." | "Contact" | Form missing action attr |

No files were modified. You are in full control.
Approve and Apply
After reviewing the proposals, pick the changes you want and tell Gemini to proceed. You can approve everything, or be selective:
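To approve everything, something like:

```
The proposals look good. Apply all of them.
```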
Or, be selective:
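For example (referencing the issues from the review table; adjust to your own results):

```
Apply only the missing-lang and alt-text fixes. Leave the form in
contact.html as it is for now.
```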
Verify the Changes
Always verify. Ask Gemini to confirm what it changed:
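For example (illustrative wording):

```
List every file you modified and summarize the exact changes in each.
```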
Open the files in VS Code and compare. Does the output match what Gemini claimed? Building this "trust but verify" habit is essential for professional AI use.
🔧 Troubleshooting Lab B
"Gemini modified files despite my DO NOT instruction" — The AI sometimes misinterprets constraints. If this happens, use Ctrl+Z in VS Code to undo, and rephrase your constraint more firmly: "LIST the proposed changes only. Do NOT edit any files."
"The table output looks messy" — Try adding "Format your response as a clean markdown table" to get more structured output.
"Only 2 files were created instead of 4" — Sometimes Gemini batches file creation. Simply ask: "Please create the remaining files from my previous request."
Agentic Refactor
This is where it gets powerful. Instead of giving the AI one instruction at a time, we will ask it to create a plan and then execute that plan step by step. This is the heart of "agentic" behavior: the AI maintains context across multiple actions, like a colleague following a project brief.
Generate a Roadmap
Ask Gemini to analyze your project and create a structured improvement plan. This is like giving a junior engineer a design review assignment:
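A roadmap prompt along these lines works (illustrative wording; the file name matches the expected output below):

```
Analyze all the files in this project and write a prioritized improvement
roadmap. For each improvement, include why it matters, what to change,
and a time estimate. Save it as improvement-roadmap.md. Do not change
any other files yet.
```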
Gemini creates "improvement-roadmap.md" in your folder. Open it in VS Code. You should see a structured document like:

```
# Project Improvement Roadmap

## Current State
- 3 HTML files, 1 CSS file
- Dark mode styling, basic semantic structure...

## Improvement 1: Add Viewport Meta Tag (HIGH IMPACT)
**Why:** Without it, mobile users see a desktop-sized page...
**What:** Add to index.html, about.html, contact.html
**Time:** 2 minutes

## Improvement 2: ...
```

This is now a persistent "brain" that the AI can follow.
Execute the Roadmap Step-by-Step
Now comes the agentic part. Instead of manually implementing each improvement, you tell the AI to follow its own plan:
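For example (illustrative wording):

```
Read improvement-roadmap.md and implement Improvement 1. When you are
done, tell me exactly what you changed.
```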
After it completes, continue with the next one:
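```
Now implement Improvement 2 from the roadmap.
```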
Ask for a Summary Report
After completing a few improvements, ask the AI to document its work. This creates an audit trail:
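For example (the file name here is an illustrative suggestion):

```
Create a file called work-summary.md documenting every change you made
in this session: which files were touched, what changed, and why.
```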
🔧 Troubleshooting Lab C
"The roadmap file was not created" — Gemini may have displayed the roadmap in the terminal instead. Ask: "Save that roadmap as a file called improvement-roadmap.md"
"The AI lost context between improvements" — Long conversations can cause context drift. Remind it: "Refer back to improvement-roadmap.md for context before proceeding."
"Changes broke existing functionality" — This is normal in iterative development! Ask Gemini: "Undo the last change and try a different approach that preserves the existing navigation."
Prompt Engineering Patterns
You have now used Gemini CLI for three different workflows. But the quality of your output depends entirely on the quality of your input. In this section, you will learn structured prompt patterns that dramatically improve results.
The CRISP Framework
Use this five-part structure for any complex prompt. Each letter stands for a component that makes your prompt more effective:
- **C** — Background information the AI needs
- **R** — Who should the AI act as?
- **I** — Step-by-step task description
- **S** — Constraints, format, length limits
- **P** — Audience and tone of the output
Practice: Transform a Weak Prompt
Here is a common, weak prompt. Your task is to rewrite it using the CRISP framework, then compare the outputs:
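For example, a weak prompt might be as terse as this (illustrative; the original workshop prompt may differ):

```
Make me a portfolio website.
```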
Now try this CRISP-structured version:
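A CRISP-structured version might look like this (the labels and details below are one illustrative expansion of the five components, not the canonical workshop prompt):

```
Context: I am building a personal portfolio to show to recruiters.
Role: Act as a senior front-end developer.
Instructions: Create a single-page portfolio site in portfolio.html with
  a hero section, a projects grid, and a contact footer.
Specifications: Semantic HTML5, responsive CSS in a <style> block,
  no JavaScript, under 150 lines.
Presentation: Polished and professional, aimed at hiring managers.
```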
The weak prompt will produce a generic, unstyled page. The CRISP prompt will produce a polished, portfolio-ready site with semantic HTML, responsive design, and professional aesthetics. Notice how specificity controls quality.
Prompt Cheat Sheet
Save these power phrases for your future workflows. They consistently improve AI output quality:
| Power phrase | Effect |
|---------------------------------|----------------------------------------|
| "Think step by step" | Activates chain-of-thought reasoning |
| "DO NOT modify any files yet" | Prevents premature execution |
| "Show me a diff before applying"| Forces preview of changes |
| "Format as a markdown table" | Structures output for readability |
| "Rate each issue: low/med/high" | Adds priority to reviews |
| "Explain your reasoning" | Makes the AI show its work |
Design Your Own Workflow
Now it is your turn. Using the three patterns you just mastered (Explain, Automate, Refactor), design and execute your own agentic workflow on a topic that matters to you. Here are some ideas to get you started:
Data Dashboard
Ask the agent to create an interactive HTML dashboard that visualizes climate data, satellite telemetry, or any dataset you choose.
Research Summarizer
Paste in a section of a research paper and have the agent produce an executive summary, key findings, and a limitations analysis.
Automation Script
Describe a tedious task from your actual work (renaming files, formatting reports, parsing data) and have the agent write a script to automate it.
Mission Briefing Generator
Create a reusable template that generates professional mission briefing documents from raw input parameters like orbit type, payload mass, and launch window.
Meeting Minutes Extractor
Paste in a rough transcript or notes from a meeting, and have the AI produce structured minutes with action items, decisions, and owners.
Whichever project you choose, structure your prompt around four elements:
1. Context: What does the AI need to know?
2. Task: What should it do?
3. Constraints: What are the boundaries? (e.g., "do not modify existing files")
4. Output Format: How should the result look?
Workshop Complete
Congratulations! 🎉
You have completed the WIA-Europe AI-W Workshop: Skills in Motion. Here is what you accomplished today:
- ✅ Lab A: Used AI to explain and audit code (Human-in-the-Loop)
- ✅ Lab B: Automated repetitive tasks with a Dry Run workflow
- ✅ Lab C: Orchestrated a multi-step refactoring roadmap
- ✅ Lab D: Mastered prompt engineering with the CRISP framework
- ★ Challenge: Designed your own agentic workflow
You are no longer just a "prompter." You are a director of AI systems. Take this skill into your career, your research, and your teams.