May 7, 2024

How We Improve Service and Profits with AI - Roman Pedan, Kasa

In this episode, Roman Pedan, the Founder and CEO of Kasa, explains how the company uses cutting-edge artificial intelligence (AI) technology to improve the guest experience and drive operational efficiency.

Listeners will learn:

  • Operational Transformation: Understand the specific AI tools Kasa employs, including large language models, to streamline operations, from categorizing guest inquiries to prioritizing urgent requests.
  • Guest Experience Enhancement: Explore how AI contributes to improving the guest experience by providing quick and accurate responses to inquiries, improving overall satisfaction.
  • Behind-the-Scenes Insights: Gain insights into the challenges and successes of implementing AI in hospitality, including the importance of digital communication and structured data.
  • Future of AI in Hospitality: Roman shares his vision for the future of AI in hospitality, focusing on proactive guest service and personalized experiences.

Want to get my summary and actionable insights from each episode delivered to your inbox each day? Subscribe here for free.

Follow Hospitality Daily and join the conversation on YouTube, LinkedIn, and Instagram.

Music by Clay Bassford of Bespoke Sound: Music Identity Design for Hospitality Brands

Transcript

Josiah: Yesterday, we heard Roman Pedan explain how he and his teams at Kasa have questioned everything when it comes to technology, and it's something that served them well in becoming one of the fastest-growing hotel operators in the world today. In this episode, you'll hear a deep dive into how this approach played out as they thought about using AI. Of course, AI is really hot right now, but I don't want you to just follow trends. I'd like you to hear Roman talk about this today and think about what this could mean for you. 

Josiah: As we're sitting here recording in 2024, artificial intelligence is hot. Everyone's been talking about it for a year or so, even though it has existed, of course, for years before this, but the public consciousness around AI is high on every level. And so I'm curious: how have you, over the past year, taken this mentality of listening to guests and what they want, and to your staff and other stakeholders, and thought about approaching artificial intelligence and your use of it in this business?

Roman: It's almost as if our business was built to harness the power of artificial intelligence, with the idea that AI would one day be a source of real opportunity. But we sort of lucked into it. I couldn't have predicted that there would be a GPT moment in the last year where, all of a sudden, people realized how powerful large language models could be, even though they had been in development for at least five years prior. So much of what we built is purpose-built to harness it, but unintentionally. And because of that, we are actively doing some really exciting things, always with the objective function of: how do we deliver more profitability while delivering a better and better guest experience? Profitability for the owner, and a better and better guest experience for the guest. The most obvious use of AI, and when I say AI I'm specifically referring to large language models right now, although there is a wider and more expansive domain, is to reduce the cost of the operation by automating certain things. Again, it's like Maslow's hierarchy of needs; that's the first step. So over the last year, we've been working with OpenAI, integrating their large language model into our communication flow. We did it in a few stages. One, the large language model would categorize every question a guest messaged us about. If you text us about check-in, it auto-assigns a category saying this is a check-in question. If you text us about luggage storage, it assigns luggage storage as the category. Simple. Why that's important is it allows us to monitor across the country which categories create the most issues and prioritize our roadmap on the categories that are the most costly because they create the most questions. That's at the national level, which allows us to get more and more efficient almost line by line, because what's the next thing we work on? Well, it's the category that represents 30% of our costs, whatever category it might be. At a local level, the teams can see, oh, at this property we're constantly getting questions about the HVAC system. They might know that anyway, but this shows in data how valuable solving that problem at the root cause will be. So first, we categorized. The second thing we did is put an urgency score on the questions. Certain questions you text us can be answered in over five minutes, and it's not a problem, like: where's the nearest coffee shop? Other questions, every second feels like five minutes: I'm locked out of the room, and I need to get into the unit.
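To picture how a flow like this might be wired up, here is a minimal sketch of LLM-based categorization and urgency scoring for an incoming guest message. The category list, prompt wording, model choice, and JSON shape are illustrative assumptions, not Kasa's actual implementation; the only thing taken from the conversation is the idea of tagging each message with a category and an urgency score before it enters the response queue.

```python
# Hypothetical sketch: classify an incoming guest message by category and urgency
# so that urgent requests (e.g., lockouts) can jump to the front of the queue.
# Category names, prompt wording, and the model are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["check-in", "luggage storage", "amenities", "HVAC", "lockout", "other"]

def triage_message(guest_message: str) -> dict:
    """Return a category label and an urgency score (1 = low, 5 = critical)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You triage guest messages for a hotel operator. "
                    f"Pick one category from {CATEGORIES} and an urgency score "
                    "from 1 (can wait five minutes) to 5 (every second counts, "
                    "e.g., a guest locked out of their room). "
                    'Reply as JSON: {"category": ..., "urgency": ...}'
                ),
            },
            {"role": "user", "content": guest_message},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Example: a lockout should score high and be pushed to the top of the queue.
print(triage_message("I'm locked out of my room and can't get back in!"))
```

The same category labels can then be aggregated nationally or per property to spot which issues are the most costly to answer, as described above.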

Josiah: And I imagine there was a certain amount of judgment required in that, where you have to, again, think with this sort of "Hall of Fame host" mentality?

Roman: And we're getting thousands of questions, right? So you have to have a really good process in order to identify which ones are urgent and which ones are not, and it's very hard to define that by hand. Our average response time is sub-three minutes; on chat, it's sub-one minute. But that means certain questions might be five minutes and certain questions might be one minute. We want to make sure that the response on lockouts is 30 seconds every time, or 20 seconds every time, a very, very fast response. And so the AI recognizes the urgency level with extraordinary accuracy and puts it at the top of the queue. So all of a sudden, you text an urgent question, and you get an extremely fast response from us. So that's stage two: the prioritization. The really exciting part is stage three, which is co-pilot. Co-pilot is where we provide all the context of a guest's reservation and their communications with us, as well as context about our property, and the large language model, to your question, recommends an answer to our team, and our team presses send. Literally, it appears, and they can press send, or they can change it a bit. Why do that two-step versus just auto-responding? The reason we did the two steps is that it allows us to learn where the large language model is delivering a good response versus not a great response. We have an advantage. The reason I said we're purpose-built for artificial intelligence, whereas a traditional hotel run by Aimbridge or, you know, branded by Marriott is not, is because all of our communications are digital. You need two prerequisites. You need the data about the property. Let's say, Josiah, you're staying here and you have a problem with the thermostat. You text Kasa: how does the thermostat work? For the AI to answer the question, it needs to know the model number of the thermostat. It needs to be able to look up the thermostat instruction manual. And it also needs to actually capture your conversation. So you need structured data, where otherwise it's unstructured data. Unstructured data is just: there's a TV in the room. Structured data is: the TV is a Sony TV with this model number, and so on. And I'm not picking on Aimbridge or Benchmark or Pyramid or all these; they're, you know, good management companies, but they're not set up to structure the data in their properties, because they don't need to, and it would be kind of a waste of their time. The other thing you need is a digital flow, right? If your default way of talking to your guest is going to the front desk, that's an analog flow. It's a nice flow for many forms of hospitality, but AI cannot fit into that flow. You need to first digitize it, then you need the actual structured data: structured data about what's in the unit, structured data about you as a guest, every conversation you've had, etc. If you don't have those from the start, bringing your conversations with a guest from analog to digital as an operator and brand is very difficult. It's also really time-consuming to go through every unit and digitize all of the information. For Marriott, you would literally have to go through a million and a half rooms and digitize all the information in each room. So it's a very difficult thing to do. We've done it from the start. And so, to answer your question on why not go directly to guests: we can't answer every question.
With the large language model, there's a huge set of questions that we're able to answer, but some that we can't, and it cannot feel to the guest like we are delivering an insufficient response. That wouldn't be a Hall of Fame host kind of experience. What we have seen through co-pilot, though, and these are actual stats, is that a little over 50% of guest questions, and it's been growing, are answered with the co-pilot recommended response without any change. Over 50%. Then another 17 or so percent are answered with minor changes. And about a third require a significant reworking of the recommended response. That's not good enough to go direct to guest, because you don't know which 50% are good, right? So that's where the categorization is really important. What we looked at then is: okay, are there certain categories where the system is delivering a better response than not? We found that amenity questions, for example, what time is the gym open until? How do I access the amenities? What amenities exist on site? We were at about a 98% success rate. It maybe started at 90%, and we tweaked it by category, specifically the amenity category in this case, and increased it to 98%. 90% is a lot. So we go direct to guest on amenities. If you text Kasa about what time the gym is open, the first response will be an automated response, but it won't feel automated to you. It will feel like any other response; I've seen some amazing messages that a large language model recommended. That will bypass the human and go straight to the guest. However, if you follow up with another question that is beyond the purview of the large language model based on the category, or even within a category, where it knows certain things it can't handle, it then escalates to the guest experience team, who will answer your question by hand. And so over time, more and more categories are going direct to guest, and we're looking within each category and seeing, hey, this one's at 80%. What are the 20% it's missing? Why? Maybe it's a data problem: it doesn't have enough information, so we add in the data. Maybe it's a context problem. Maybe it's a form-of-response problem. And so we've been going through systematically, and this is hard work that our team is doing, improving each category to deliver a better and better rate. And so I'm sharing data that, you know, is 67% now; it used to be 40%, not 67%, as an example of the kind of improvement we've seen in just a short set of months. And I get excited about this, so I'm probably being very long-winded, but that's the defense: automation, lower cost, delivering better, faster responses. This obviously lowers costs for the owner, because a large set of responses happen at a much lower cost in an automatic way. It also delivers a better experience to the guest, because you're getting responses within seconds on certain categories. The proactive hospitality that we're going to be increasingly delivering is where this gets really exciting. Over time, using all the digitized communications you've had with us, the large language model is summarizing and saying, hey, Josiah has these preferences based on previous communications, and putting into our ops teams' workflow ways to really delight you behind the scenes when you stay with us.
And it's just a lot of processing that's happening really quickly, allowing us to deliver hospitality that's rarely seen, but always felt.
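To make the co-pilot and direct-to-guest routing described above concrete, here is a minimal sketch of how such a flow could be structured: a drafting step combines the guest's message with reservation details and structured property data, and a per-category success-rate threshold decides whether the draft goes straight to the guest or waits for a human to review and send. All names, thresholds, and data shapes are illustrative assumptions, not Kasa's actual system.

```python
# Hypothetical sketch of a "co-pilot" reply flow with per-category routing.
# Categories above a measured success-rate threshold bypass the human;
# everything else is queued as a draft for the guest experience team.
from dataclasses import dataclass

# Per-category success rates, measured from how often agents sent the draft unchanged.
CATEGORY_SUCCESS_RATE = {
    "amenities": 0.98,   # e.g., gym hours, pool access
    "check-in": 0.80,
    "HVAC": 0.55,
}
AUTO_SEND_THRESHOLD = 0.95  # only near-perfect categories go direct to guest

@dataclass
class Draft:
    text: str
    category: str
    auto_send: bool

def draft_reply(guest_message: str, category: str,
                reservation: dict, property_facts: dict) -> str:
    # In a real system this would be an LLM call that grounds the answer in
    # structured property data (e.g., the thermostat's model number and manual).
    return f"(draft reply about {category}, grounded in {sorted(property_facts)})"

def route(guest_message: str, category: str,
          reservation: dict, property_facts: dict) -> Draft:
    text = draft_reply(guest_message, category, reservation, property_facts)
    success_rate = CATEGORY_SUCCESS_RATE.get(category, 0.0)
    # High-confidence categories go straight to the guest; the rest wait for an
    # agent to press send (or edit), which also generates a learning signal.
    return Draft(text=text, category=category,
                 auto_send=success_rate >= AUTO_SEND_THRESHOLD)

# Example: an amenities question clears the threshold and is sent automatically.
draft = route(
    "What time is the gym open until?",
    category="amenities",
    reservation={"guest": "Josiah", "unit": "3B"},
    property_facts={"gym_hours": "5am-11pm", "thermostat_model": "Nest E"},
)
print(draft.auto_send)  # True: amenities is above the threshold
```

In this sketch, the threshold plays the role of the per-category success rates mentioned in the conversation; a follow-up question in a weaker category would simply produce a draft with auto_send set to False, i.e., an escalation to the human team.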

Josiah: I love that. Well, what you've described here is the best of sort of proprietor-led hospitality, in the sense that there's a person. I mean, we're sitting here in San Francisco at your Castro property. I met Marcus, but, you know, there was a very seamless check-in process here: hit the code in the elevator on the way up, and there's this note from Marcus, right? And then Marcus, it turned out, was spending some time up on the roof, doing some work and able to run the business just from the rooftop there, enjoying this beautiful day. But it seems like this combination works: there is a person, there's a point of contact, but Marcus is also supported by all this infrastructure. And I imagine that allows him to spend more time doing what only Marcus can do, right? And takes all the other work off his plate.

Roman: 100%. Marcus doesn't have to be stationary behind a front desk. He can be focused on: hey, this is a 12-room hotel. One, we also complex folks across the city to make sure that properties are effectively sharing resources. But how can I make every single guest stay special while also making sure that, you know, anything that comes up is handled?