Training
Upcoming: a public workshop on vision language models as design material →
Select training is now available to professionals on an individual basis, taught live in an online meeting in the studio style.
“Vitorio’s deep research expertise let him hit the ground running within days… We learned new things… almost immediately… Our team has found them fascinating, and it’s been fantastic to see our customers understand what we’re trying to do. Vitorio’s technical background and industry experience informed his prototypes… and some of the proposed changes could save us significant development time and effort.
We strongly recommend Vitorio for any technical business that wants to support its new work with deep customer insight. We look forward to bringing him back for future research, prototypes, and training.”
— Andrew Sampson, COO, and Lawrence Chan, CEO, Akliz, Inc.
Select in-depth LinkedIn posts
…on vision language models:
- December 17, 2025: 💬 Are your designs being held back by accepting the default LLM settings your engineers picked for you, or your AI chat app uses?
- December 16, 2025: 👁️ Does the new AI feature you're designing assume your users already know as much about generative AI as you do?
- November 19, 2025: ❓ What are vision language models (VLMs)? ❓ Why do I need to treat them like materials?
…on AI agents:
- December 2, 2025: 🤖👀👺 What happened when I mixed AI agents, vision models, and a wild conference demo from 2024?
- November 20, 2025: 🖥️ What work do you do that NEEDS a computer, not just a web browser?
- November 18, 2025: 🐕 Eight months in AI is like dog years: the entire "agent" market has changed.
…on research, prototypes, and training:
- December 9, 2025: ❓ How do you explain a room-scale experience to someone who can't share in it?
- November 6, 2025: 🧓☁️ A colleague asked if design innovation died in the 2010s, since so many of my training examples are from then or earlier.
- November 4, 2025: 🎨 If your UX team is giving reskinned chat UIs instead of new ways to interact with AI, maybe they don't trust you.
- October 21, 2025: 💎 Someone's going to ask if you can design a future Friend AI necklace or a second-gen Meta Neural Band. Could you?
- August 28, 2025: 🤬 A colleague was failed by Microsoft Word in a way that Jef Raskin would have called inhumane.
- August 26, 2025: Did you have it on your bingo card that regular GPT-4 would beat GPT-4o / Gemini 1.5 Pro / Claude 3.5 Sonnet for heuristic evals in 2025?
- July 28, 2025: Artur M. at VisualSitemaps asked me if LLMs doing heuristic evals had improved this year, and, yes, kinda?
- May 27, 2025: To my workshop participant who was surprised at how exploration and research could inform their design process, Matt Webb has a great example.
- April 22, 2025: Five years ago, author Robin Sloan wrote "An app can be a home-cooked meal"; vibe coding means apps can also be instant ramen or TV dinners.
- March 10, 2025: Vibe coding to production is the new immutable microservices.
- December 31, 2024: If you're serious about having generative AI conduct heuristic evaluations, there are open but tractable questions, and opportunities for novel research, before you're likely to get to a robust solution.
- December 30, 2024: For my UX, user research & usability testing colleagues: GPT-4 might find half the same issues a human would in a heuristic evaluation.
- November 26, 2024: "I know that's not what it's for, but I'm going to steal that," my workshop participants said to me last week. 🦹
- November 15, 2024: While catching up with colleagues old and new at Funsize's Big Little Print Show opening, I got to see a designer thank their former professor.
- June 3, 2024: What if your audio and video recordings of field studies and in-depth interviews could automatically preserve participants' privacy?
- February 8, 2024: This instant camera with generative AI was made possible in part by the VR Austin Jams in 2018 and 2019. Follow along: