Vision language models as a design material
Learn to directly shape AI for products that “see,” so you can ship the right thing, influence engineering decisions, and demonstrate your value.
- A three-hour, live workshop, online over Zoom, with working time and guided critique
- Learn a repeatable method for testing and documenting VLM behavior
- No technical experience required
Vision language models (VLMs) are AI systems that can “interpret” images and respond to them. If AI-powered “seeing” is a core part of what you’re building, this workshop helps you get hands-on enough to stay in the room when the hard decisions get made.
You’ll leave with outcomes you can use immediately
You’ll learn how to:
- Treat a vision language model as a design material, the way a craftsperson learns the behavior of clay or wood
- Run a material exploration: structured experimentation to reveal strengths, edge cases, and failure modes
- Use constraint exhaustion to map where the model breaks (so your product doesn’t)
- Translate VLM capabilities into real product opportunities (and rule out mirages early)
And, you’ll take home:
- Extensive references for everything we discuss (to remind you later)
- A cheat sheet of VLM constraints to exhaust (what to test, and why)
- A digital certificate to prove your AI knowledge to peers and future employers
Don’t get left out of key product decisions
When the core of a product is highly technical — AI vision, sensors, “smart” features — designers with primarily 2D practice (UX, interaction, visual design, research) often get boxed out of the parts that determine what’s actually possible.
Your usual design artifacts — flows, mockups, prototypes, storyboards — are great at communicating intent. But with vision language models and other complex technologies, they often leave critical questions unanswered, forcing engineering to guess, compromise, and ship avoidable failure modes.
This workshop gives you a way to generate empirical, defensible knowledge about the model’s behavior, so you can:
- specify what the system should do and what it can’t reliably do
- design the output (not just the UI around it)
- make tradeoffs earlier, with evidence
- align engineering, product, and stakeholders around shared constraints
I was surprised to see how this applies to industrial design, and humbled by the number of considerations it takes to launch a physical product.
A proven method
The design techniques in this workshop were formalized in the late 2000s, when designers and academics developed methods and practices for non-engineers to make sense of technologies, from NFC tags and GPS signals to mobile phone sensors and physical AI. I began teaching these methods in 2012.
For over a decade, this framework has helped designers explore sensors and prototype smart devices without needing to learn to code or become electrical engineers. Now, I’ve updated this rigorous, material-based approach specifically for the current generation of generative AI.
Proof you can show your org (and future employers)
If your organization values formal, legible learning: you’ll receive a digital certificate of completion for the public workshop.
Inside the workshop
- Level‑set on vision language models: what they are, how they work
- Map capabilities to use cases: find quick wins and non‑starters for your product
- Material exploration: an end‑to‑end example of constraint exhaustion and documentation
- Group work: plan tests, plan documentation, practice with a VLM, and get critique throughout
There will be two five-minute breaks.
The third hour is entirely practice with the vision language model.
What model will we use?
We’ll use Perceptron Isaac 0.1, a “perceptive-language” model trained to “understand” the physical world.
It can:
- “answer” questions about images
- point to or highlight regions and explain its “reasoning”
- read text in images
- take a set of images as examples
The point isn’t to become an “Isaac expert.” The point is to learn a method you can apply to any vision-capable assistant or VLM.
Prerequisites
No technical experience necessary.
Ensure your Zoom app is updated to the latest version.
Ahead of the workshop, visit training.tertile.one from the computer and internet connection you will be participating from, and contact us with any issues for troubleshooting steps: .
Who it’s for
This is for you if you’re a:
- UX / interaction / visual designer (especially if your background is primarily 2D)
- product designer working on AI-enabled features
- researcher or product manager who needs to spec and evaluate VLM-driven experiences
- non-engineering professional who must collaborate closely with engineering on AI vision behavior
This is especially useful if:
- you’re building a feature where images are inputs or context
- you need to define what “good” looks like for model output
- you’re tired of learning about AI models only through demos and hype
About the instructor
A public workshop from Tertile, LLC, led by Vitorio Miliano.
Vitorio Miliano has trained designers to work with advanced technologies as design materials for over a decade, from small shops to the Fortune 5. He brings experience across product management, software development, research, and design, with a track record of evidence-based decision-making at program and product levels.
Prior to Tertile, his work included a healthcare news briefing on the Amazon Alexa platform, a 3D environment used by NASA to visualize the International Space Station, and building research and developer-relations programs in industry.
Alternative scheduling (private groups)
For private trainings for teams/groups of 4+ (including corporate, on-site in-person, or full-day sessions), contact: , .
Frequently asked questions
Billing
Public workshops booked by credit card are billed immediately. Refunds are available by request until one business day before the workshop. Contact for refunds or alternative billing arrangements: , .
Materials
All original materials provided to participants are licensed by Tertile, LLC for their private, personal use, and are not to be shared or redistributed. Licenses are revoked in the event of a post-training refund or chargeback, and all materials must be destroyed.
Terms of service and privacy policy
Payment for and participation in training is governed by Tertile, LLC’s terms of service (PDF) and privacy policy (PDF).