
ChatGPT Implications for UX

Travisse Hansen

May 11, 2023


iPhone Moment


Remember where you were when the first iPhone came out? Remember first pinching to zoom in or out on a picture, or using slide-to-unlock? I personally remember where I was - in my friend’s basement, watching the Steve Jobs announcement repeatedly on YouTube. The touchable interface was so mind-boggling and simultaneously made so much sense. Here’s the original demo for those who don’t remember.


What followed that demo and the success of the iPhone was such a leap forward in user experience that it rippled across every industry for more than a decade. In the years that followed, every industry felt the pressure to adjust its UX to these new standards - the consumerization of enterprise software and much of the fintech movement are just two examples. In short, once someone experiences a massive leap forward in user experience, the question becomes: why can’t all apps work like this?


ChatGPT has had its iPhone moment. It’s possibly the most demo-able piece of software the world has ever seen, and once you start using it instead of Google or Wikipedia or Stack Overflow, it’s hard to imagine going back. The bar has been raised.


What This Means for UX


We at Denada Design think there are two takeaways you can apply to your product immediately - one incremental, the other more radical.


  1. Users Are Editors First, Creators Second


For starters, every product that involves creating (tools for marketing, coding, design, documentation, social media, etc.) should now completely eliminate the blank page problem. No user who has even a foggy idea of what they want should have to draw the first lines or write the first words. We are all editors first, creators second now.


Implementing this from a UX perspective is fairly straightforward. We’re already used to products that have templates, and more than 100M people are now used to ChatGPT, so prompting and working from pre-built content isn’t terribly new.


The only UX nuance here is providing the right content to start with, the right knobs to adjust it, and adding prompting where it makes sense. An example of this is something we recently worked on with a customer that does text marketing for restaurants. You can see below where they let customers adjust the tone of their marketing texts using traditional inputs as well as a prompt, which then creates suggested content for their texts.
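
To make the pattern concrete, here’s a minimal sketch of how those pieces might fit together. The names and the callModel helper are hypothetical stand-ins for whatever LLM API you use - this is not our customer’s actual implementation, just an illustration of combining traditional inputs with a freeform prompt.

```typescript
// Hypothetical sketch: turning UI "knobs" plus an optional freeform prompt
// into a single instruction for an LLM that drafts marketing texts.
// `callModel` stands in for whatever LLM API you use.

type Tone = "friendly" | "urgent" | "playful" | "formal";

interface DraftRequest {
  restaurantName: string;
  tone: Tone;            // set via a traditional dropdown or slider
  maxLength: number;     // SMS-friendly character budget
  userPrompt?: string;   // optional freeform prompt from the user
}

declare function callModel(prompt: string): Promise<string>;

async function suggestMarketingText(req: DraftRequest): Promise<string> {
  const prompt = [
    `Write an SMS marketing message for ${req.restaurantName}.`,
    `Tone: ${req.tone}. Keep it under ${req.maxLength} characters.`,
    req.userPrompt ? `Additional direction from the user: ${req.userPrompt}` : "",
  ]
    .filter(Boolean)
    .join("\n");

  // The model produces a draft; the user edits it rather than
  // starting from a blank page.
  return callModel(prompt);
}
```

The point is that the structured inputs and the prompt feed the same generation step, so the user always lands in an editing posture, never a blank one.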

  2. Composable Interfaces


This change is much more radical in nature.


Apps built pre-GPT mostly have static interfaces. Let’s take the example of a workout app - Strong App specifically (great app, by the way).



A user can select from various templates, each containing a set of exercises. If they want a totally new workout, they either 1. need to create it themselves using the exercises as building blocks, or 2. if the app isn’t flexible enough, they’re simply out of luck.


In an AI world, it doesn’t have to be this rigid. We believe a new paradigm will be to define flexible components and then let your AI arrange and deliver them in the way that makes the most sense. In the workout example, a user could say, “I want to work on my calves and shoulders - can you make me something for that?” And the workout app can say, “Sure thing!” and create a totally custom workout from the library of exercises and the UI components for each one. This is straightforward in the workout case, but the implications are profound.
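
Here’s a minimal sketch of what that might look like, assuming a registry of typed UI components and a model that returns a declarative layout spec. As before, callModel and the block types are hypothetical - this is a sketch of the pattern, not a finished design.

```typescript
// Hypothetical sketch: a component library plus a model-produced layout spec.
// The model never renders UI directly; it only arranges components you define.

interface ExerciseCard { kind: "exercise"; name: string; sets: number; reps: number; }
interface RestTimer   { kind: "rest"; seconds: number; }
type WorkoutBlock = ExerciseCard | RestTimer;

// The library of exercises the model may draw from (your data, not the model's).
const exerciseLibrary = ["Calf Raise", "Seated Calf Raise", "Overhead Press", "Lateral Raise"];

declare function callModel(prompt: string): Promise<string>;

async function composeWorkout(userRequest: string): Promise<WorkoutBlock[]> {
  const prompt = [
    "You are assembling a workout from a fixed library of exercises.",
    `Library: ${exerciseLibrary.join(", ")}.`,
    `User request: "${userRequest}"`,
    "Respond with JSON: an array of blocks, each either",
    '{"kind":"exercise","name":...,"sets":...,"reps":...} or {"kind":"rest","seconds":...}.',
  ].join("\n");

  const blocks: WorkoutBlock[] = JSON.parse(await callModel(prompt));

  // Validate against the library so the AI can only arrange what you defined.
  return blocks.filter(
    (b) => b.kind === "rest" || exerciseLibrary.includes(b.name)
  );
}
```

The key design choice is that the model only outputs a declarative spec; your app still owns the component library, the rendering, and the validation.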


What should live on a dashboard or in a reports area, for example? Should it be static, or should there be a set of UI components that can be filled with data? Furthermore, how do you track usage in this world? How do you decide which UI elements must remain static and which should be turned into components?


We’re currently working with a client in the fintech space on this as well (https://get.voltamoney.com/volta-app/) and will update here as we explore these concepts further.