Great and comprehensive overview of a strangely overlooked subject. Just shared on LinkedIn.

Two observations:

- As a UI/UX professional, I prefer to frame design problems and solutions around "user goals" vs "following user intent." The former is a slightly more general category. Users don't always know their intentions when they visit a new site or app, but they generally have a goal (entertainment, learning, professional development, etc).

- For your excellent list of best practices, I would add one more to strive for: reproducibility. I realize this is more challenging with AI. It's true you usually don't get the same probability matrix twice! But building in mechanisms to capture and report serious errors is essential -- otherwise we lose the entire mindset and ethos for QA in software. That would be unfortunate, because our process works remarkably well compared to other industries...

author

First, thanks for your positive feedback!

It's a good point about goals vs. intent. Both in specific use cases (e.g., just 'playing around' with a character AI) and in broader terms (as we are still learning what AI can do for us), user goals and intent are rather fuzzy. We often don't have a specific goal when asking gen AI to make an image or a song - just something that strikes our fancy. But I typically have some intention or goal in my mind's eye about it. The 'magic prompt' is a bit of a parlor trick: it guesses at a more specific intention, somewhat randomly, and hopes it matches user expectations.

LLMs are stochastic and not predictable, which makes reproducibility a challenge in AI apps. Reliability is another hard goal for LLMs. However, QA and testing have long been applied to software with randomized behavior. Tools like LangSmith (a companion to LangChain) can capture the behavior of prompt chains, AI agents, etc. I'm sure toolsets and practices will evolve and mature. As with UI/UX, it's a matter of maintaining the principles while applying them in new ways to a new domain.
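The capture-and-report idea doesn't depend on any particular framework. As a minimal sketch (using a hypothetical `fake_llm` stand-in for a real model call), a thin wrapper that logs every prompt/response pair and flags responses failing basic checks gives you a QA audit trail even when outputs aren't reproducible:

```python
import time
from typing import Callable, Dict, List, Tuple

def make_logged_llm(
    llm_fn: Callable[[str], str],
    validators: List[Callable[[str], bool]],
) -> Tuple[Callable[[str], str], List[Dict]]:
    """Wrap an LLM call so every prompt/response pair is captured,
    and responses failing any validator are flagged for triage."""
    log: List[Dict] = []

    def call(prompt: str) -> str:
        response = llm_fn(prompt)
        # Record which checks failed, by validator name.
        failures = [v.__name__ for v in validators if not v(response)]
        log.append({
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
            "failed_checks": failures,
        })
        return response

    return call, log

# Hypothetical stand-in for a real model call (e.g. an OpenAI or
# LangChain invocation) -- deterministic here only for illustration.
def fake_llm(prompt: str) -> str:
    return f"Echo: {prompt}"

def non_empty(response: str) -> bool:
    return bool(response.strip())

call, log = make_logged_llm(fake_llm, [non_empty])
call("Summarize the release notes.")

# Flagged entries can be exported as tickets for a QA backlog.
issues = [entry for entry in log if entry["failed_checks"]]
```

The point isn't the wrapper itself but the habit: every call leaves a record, and failing checks surface as reviewable issues rather than vanishing into non-determinism.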


Yup. It's an evolving discipline, as for that matter is UX. Do you know of any organizations or working groups that are dedicated specifically to improving QA process and best practices as they apply to the AI development model?

This is a practical, not abstract, question. Particularly when applying AI for a new or different use case, quirks and odd behavior are inevitable. It seems really strange not to have a standard system for ticketing and logging issues.

author

I suspect existing software engineering organizations in the field will orient their best practices to take on the case of AI applications. As for practical ways of meeting people working on these questions, I use Meetup to find AI-related meetings and groups, both physical (in Austin, TX) and virtual. I gave a talk last month on building AI that touched on some of the above topics. I'm also going to the AI Engineer World's Fair in SF in late June and will likely write articles on it (as I did for the previous one in October); I'm sure some talks there will cover UI/UX for AI apps.
