Function calls are the unsung hero of LLM-driven UI manipulation. While OpenAI has made great strides leveraging function calls to manipulate the UI in their demos, the rest of the industry has yet to take its first meaningful steps. But what do those steps look like?
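As a concrete sketch of what a first step could look like, here is a minimal browser-side example using the OpenAI Node SDK's chat completions tools format. The `highlight_element` function, the `gpt-4o` model choice, and the DOM ids are all illustrative assumptions, not a prescribed design; in production the API call would live behind your own backend.

```ts
import OpenAI from "openai";

// Demo only: in a real app, route API calls through your own server.
const openai = new OpenAI({ dangerouslyAllowBrowser: true });

// A hypothetical UI action exposed to the model as a callable function.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "highlight_element",
      description: "Highlight a UI element identified by its DOM id",
      parameters: {
        type: "object",
        properties: {
          elementId: {
            type: "string",
            description: "DOM id of the element to highlight",
          },
        },
        required: ["elementId"],
      },
    },
  },
];

async function askAndApply(userMessage: string) {
  const response = await openai.chat.completions.create({
    model: "gpt-4o", // assumed model; any tool-capable model works
    messages: [{ role: "user", content: userMessage }],
    tools,
  });

  // If the model chose to call our UI function, apply it to the page.
  for (const call of response.choices[0].message.tool_calls ?? []) {
    if (call.type === "function" && call.function.name === "highlight_element") {
      const { elementId } = JSON.parse(call.function.arguments);
      document.getElementById(elementId)?.classList.add("highlighted");
    }
  }
}

askAndApply("Show me where the checkout button is.");
```

The key design point is that the model never touches the DOM directly: it only requests a named action, and the application decides how to render it.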
Most chatbots stick to one modality: either text or voice. But as someone who uses subtitles for everything, I wonder why voice bots don’t also include text for accessibility. Is it a limitation in the voice tech stack? Does text clutter the UI? To find out, I decided to build my own streaming-first chatbot interface with both text and voice.
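To sketch the text-plus-voice idea, here is one way to pair a streaming chat completion with the browser's built-in Web Speech API, so every spoken sentence also appears on screen as text. The `transcript` element id, the sentence-splitting heuristic, and the model name are my own assumptions for illustration.

```ts
import OpenAI from "openai";

// Demo only: in a real app, route API calls through your own server.
const openai = new OpenAI({ dangerouslyAllowBrowser: true });

// Render streamed tokens as text immediately, and speak each sentence
// as it completes, so voice output always has a visible transcript.
async function streamWithCaptions(userMessage: string) {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o", // assumed model
    messages: [{ role: "user", content: userMessage }],
    stream: true,
  });

  const transcript = document.getElementById("transcript")!; // assumed element
  let sentence = "";

  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content ?? "";
    transcript.textContent += token; // text modality: show tokens as they arrive
    sentence += token;

    // Voice modality: a crude end-of-sentence heuristic triggers speech.
    if (/[.!?]\s*$/.test(sentence)) {
      speechSynthesis.speak(new SpeechSynthesisUtterance(sentence));
      sentence = "";
    }
  }
  if (sentence) speechSynthesis.speak(new SpeechSynthesisUtterance(sentence));
}

streamWithCaptions("Give me a two-sentence summary of function calling.");
```

Speaking per sentence rather than per token keeps the synthesized voice natural while still letting the text render token by token.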
Think chatbots will replace all web UIs? Not so fast. This post explores where buttons, forms, and tables still beat chatbots in e-commerce, with examples from Apple and Taylor Stitch, and shows where each interface truly fits.