
In early 2023 I had the opportunity to design the interface and user experience of an innovative agentic AI copilot that enabled a much more efficient search experience in the digital-twin product catalog of the 3DOptix optical design and simulation software.
Innovative UX features included:
Versioning
Tolerance controls and feedback
Prompting guides
A great search experience is essential for 3DOptix, allowing users to efficiently navigate the product catalog, find and select components, and add them to their optical design workspace.
3DOptix product catalog with OptiChat intro screen, chat screen, search history tabs, tolerance indicators and OptiChat button
The 3DOptix Software
A web app for simulating light and optics
Design optical setups in 3D space with digital twins of real optical elements
3DOptix users
Optical engineers
Physicists
Commercial research labs
Academia: Students, labs, educators
UI Design
Problem and Requirements
The initial requirements I gathered by interviewing stakeholders and experts:
AI should perform searches and calculations in the Product Catalog based on user prompts
Users should be able to set tolerance levels for search result precision
Plan ahead for the integration of additional AI functions (e.g., building, analyzing).
Research and Discovery
Research
To address the requirements, I sought references for two specific interaction patterns:
1. AI chat interfaces with interactive UI elements, beyond just the basic functions like "send" or "like."
2. AI assistants that are capable of manipulating software features outside the AI interface.
Discovery and first prototype
First, I created a conversation chart to get started with the conversational design and, more generally, to wrap my head around what we were going to do.
Experimental conversation flow
Next, I asked our AI engineer for a real conversation example from his code. I then created three versions of it to explore how the conversation would flow with the tolerance meter and the existing catalog filters. Below are segments of the initial conversation tests and interactive flows I designed.
Designing for Versioning
Displaying past search results
Display results button inside the chat history itself

Tabs that save each set of search results for revisiting

Anticipated user need:
Re-display past search results during a search session
(Scrolling through chat history to find previous results is inefficient and disrupts the workflow. Users need a streamlined way to revisit results for quick comparison and better decision-making.)
My initial hypothesis:
Have the AI display a “Display results” button for each set of results
This would let users scroll back in the chat to find and re-display former sets of search results
Insights after testing the hypothesis with a prototype:
The “Display results” button feels disruptive and impractical due to excessive scrolling
It should be replaced with a different solution
Reframed challenge after testing:
The OptiChat AI assistant can generate many different sets of search results during a session. Users should have easy access to all of them and be able to compare them
Solution:
Implement a new search-results tab for each generated set of search results
Users can switch between tabs, delete them, and reset the results (sketched below)
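To make the tab behavior concrete, here is a minimal TypeScript sketch of how per-search result tabs could be managed on the client. The names (ResultTabManager, CatalogItem) and data shapes are illustrative assumptions, not the actual 3DOptix implementation.

```typescript
// Illustrative sketch only; names and structure are assumptions, not the 3DOptix API.
interface CatalogItem {
  partNumber: string;
  name: string;
}

interface ResultTab {
  id: string;
  prompt: string;          // the user prompt that produced this result set
  results: CatalogItem[];  // catalog components returned by the AI search
}

class ResultTabManager {
  private tabs: ResultTab[] = [];
  private activeId: string | null = null;

  // Create a new tab for every set of results the assistant returns.
  addTab(prompt: string, results: CatalogItem[]): ResultTab {
    const tab: ResultTab = { id: crypto.randomUUID(), prompt, results };
    this.tabs.push(tab);
    this.activeId = tab.id; // newest results are shown immediately
    return tab;
  }

  // Switch between past result sets without scrolling the chat history.
  switchTo(id: string): void {
    if (this.tabs.some((t) => t.id === id)) this.activeId = id;
  }

  // Delete a single tab the user no longer needs.
  deleteTab(id: string): void {
    this.tabs = this.tabs.filter((t) => t.id !== id);
    if (this.activeId === id) this.activeId = this.tabs.at(-1)?.id ?? null;
  }

  // Reset clears all result tabs for a fresh session.
  reset(): void {
    this.tabs = [];
    this.activeId = null;
  }
}
```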
Designing for Confidence
Tolerance meter vs. tolerance indicators
The "Tolerance meter" for users to set the desired accuracy of search results

Tolerance indicators that show the accuracy of the search results

Anticipated user need:
Set the search results precision
(Users expect AI to be perfect, but even the best AI assistants are just super smart guessers—so tolerance indicators keep everyone on the same page)
The initial solution that the stakeholders wanted:
Implement a "Tolerance meter", a slider with 5 steps that range from Low to High precision
Insights after testing this suggested solution with a prototype:
The tolerance meter felt redundant
Users would naturally expect the AI to provide the most precise results available. If exact matches aren’t found, the AI should offer less precise alternatives and communicate this to the user
Reframed challenge after testing:
Users need a way to stay informed about the precision of the offered search results
Solution:
Add a tolerance indicator to each search result (sketched below)
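To illustrate how each result could carry its own indicator, here is a small TypeScript sketch that maps a hypothetical match score from the AI search to an indicator level. The matchScore field, the thresholds, and the part numbers are assumptions for illustration, not values from the real backend.

```typescript
// Hypothetical mapping from an AI match score (0–1) to a per-result tolerance indicator.
type ToleranceLevel = "exact" | "close" | "approximate";

interface SearchResult {
  partNumber: string;
  matchScore: number; // assumed 0–1 score from the AI search backend
}

function toleranceLevel(result: SearchResult): ToleranceLevel {
  // Thresholds are illustrative; the real cutoffs would come from the AI engineer.
  if (result.matchScore >= 0.95) return "exact";
  if (result.matchScore >= 0.8) return "close";
  return "approximate";
}

// Example: label each result before rendering its tolerance indicator in the UI.
const results: SearchResult[] = [
  { partNumber: "LENS-001", matchScore: 0.97 },
  { partNumber: "LENS-002", matchScore: 0.82 },
];
results.forEach((r) => console.log(r.partNumber, toleranceLevel(r)));
```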
Designing for Flexibility and Control
Filtered vs. AI product search
The catalog in manual mode
The catalog in OptiChat mode, with no manual filtering
Anticipated user expectation:
AI search should take current filter selection into account.
Insights after testing this assumption:
The AI engineer tested this idea and concluded that it is not technically feasible.
Resulting problem:
How do we prevent users from applying filters during AI searches?
Solution:
Remove all filter-related UI elements when the OptiChat window is open
The OptiChat button toggles between filtered search and AI-assisted search (sketched below)
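The toggle behavior could look something like the following TypeScript sketch, in which opening OptiChat hides the filter UI; the state shape and names are assumptions for illustration only, not the production code.

```typescript
// Sketch of the mode toggle: opening OptiChat hides the manual filter UI (names are assumptions).
type CatalogMode = "manual" | "optichat";

interface CatalogViewState {
  mode: CatalogMode;
  filtersVisible: boolean;
}

function toggleOptiChat(state: CatalogViewState): CatalogViewState {
  const mode: CatalogMode = state.mode === "manual" ? "optichat" : "manual";
  return {
    mode,
    // Filter-related UI elements are only shown in manual mode,
    // so users cannot apply filters while an AI search is active.
    filtersVisible: mode === "manual",
  };
}

// Example: starting in manual mode and pressing the OptiChat button.
let view: CatalogViewState = { mode: "manual", filtersVisible: true };
view = toggleOptiChat(view); // { mode: "optichat", filtersVisible: false }
```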
Designing for Knowledge
Onboarding
Splash screen with sample prompts
Info popup with 3 tabs: Capabilities, Tips and Limitations
Incorporating User Feedback
Improving the model and the user experience
User reviews happen in a separate chat window
Current Implementation (Beta)
What else?
For more extensive usability testing of this feature, I would write test plans to measure the metrics in the diagram below and work together with a UX research specialist.
OptiChat success metrics to measure
“Everything counts (in large amounts)”
Depeche Mode
Future OptiChat
The next version of OptiChat will build optical setups, perform analyses, and write code. I created some UI designs for the chat window, but they are very preliminary, and there will be many different context-dependent UI elements involved.