Real World Serverless with theburningmonk

#26: Serverless chatbots with Gillian Armstrong

August 26, 2020 Yan Cui Season 1 Episode 26

You can find Gillian on Twitter as @virtualgill.

Liberty Mutual is hiring, check out their jobs:

Click here to listen to episode #18 where we discussed voice technologies with Aleksandar Simovic.

If you want to learn how to apply the well-architected principles to build production-ready serverless applications, then check out my upcoming workshops at and get 15% OFF with the promo code "yanprs15".

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

This episode is sponsored by ChaosSearch.

Have you heard about ChaosSearch? It’s the fully managed log analytics platform that uses your Amazon S3 storage as the data store! Companies like Armor, HubSpot, Alert Logic and many more are already using ChaosSearch as a critical part of their infrastructure and processing terabytes of log data every day.  Because ChaosSearch uses your Amazon S3 storage, there’s no moving data around, no data retention limits and you can save up to 80% vs other methods of log analysis.  So if you’re sick and tired of your ELK Stack falling over, or having your data retention squeezed by increasing costs, then visit today and join the log analysis revolution!

Opening theme song:
Cheery Monday by Kevin MacLeod

Yan Cui: 00:12  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real world practitioners and get their stories from the trenches. Today, I'm joined by Gillian from Liberty Mutual. Hi, Gillian, welcome to the show.

Gillian Armstrong: 00:26  

Hi. Thanks for having me.

Yan Cui: 00:29  

So we've known each other for a little while now. For the audience, can you tell us about yourself and your experience with AWS and serverless?

Gillian Armstrong: 00:37  

Sure. So I've been working with AWS and serverless on several projects, both large-scale enterprise and some more exploratory work, for about three years now. I'm also an AWS Machine Learning Hero. I'm a big serverless fan, and I helped organise ServerlessDays Belfast earlier this year, thankfully before we all went into lockdown, when I could still see people in person.

Yan Cui: 01:00  

That was a really good conference. I really enjoyed it. So can you tell us a little bit about Liberty Mutual? You guys are quite a big company, but for the audience who haven't heard of Liberty Mutual, can you just talk at a high level about what you guys do and what your work is there?

Gillian Armstrong: 01:16  

Sure. So Liberty Mutual is a big Fortune 100 insurance company. It's got more than 50,000 employees worldwide, and more than 4,000 of those are in IT. The company in which I work, Liberty IT, is a wholly owned subsidiary of Liberty Mutual, with offices in Belfast, in Northern Ireland, and in Dublin, Ireland. We have about 600 software engineers focused on delivering world-class software solutions for Liberty Mutual. So obviously, we're building internally for Liberty Mutual, so mostly insurance-based applications, although I have also worked on internal employee applications as well. And it's very exciting, because even though we're this huge, 100-year-old company, we are very much moving towards a serverless-first mindset, and trying to drive all of our architectural decisions across all of IT in that mindset, with cloud and serverless technologies, as far as we can.

Yan Cui: 02:26  

Yeah, that is really good to hear. And you guys certainly have quite a few AWS Heroes in your ranks as well. So for you personally, you have done some really interesting work with chatbots. Can you maybe talk us through some of the work you've done in that area?

Gillian Armstrong: 02:42  

Sure. So I know I've said to you before that Alexa was sort of my gateway into both chatbots and into serverless. Way back when Alexa was still pretty new, I got an Alexa device for Christmas. I plugged it in, and my little brother tried to order an Xbox One through it. So I did turn off automatic ordering. But I was just really intrigued by this new technology that could sort of understand humans. It could understand what you said; you didn't have to say it in a very specific way. And I just really wanted to know more about it. So it was the Christmas holidays, and I hopped on and did some tutorials, and without really even knowing what I was doing, I wrote my first Lambda on AWS. It was the first time I'd used AWS or written a Lambda function. I created my little model so that Alexa could understand me, and I was just super excited. When I got back into the office in January, they were setting up a new team to look at chatbots, and so I joined that team. What we built there was an internal digital assistant. It was for employees, and it could answer questions around lots of internal things like the help desk or finance or HR. And it was a huge success, very popular. We've actually managed to spin it off into its own company called Workgrid, which is amazing: we've been able to take those tools that we built for our own employees and then actually take them out and offer them to other companies as well. And then spinning off from that, we have lots of other bits and pieces going on in the company. We've got virtual agents in our call centres that have been really successful, really great people working on lots of that, and we've had quite a few of them out giving talks about what they've been doing. So lots of really exciting stuff in the conversational space there.

Yan Cui: 05:00  

You mentioned that you also did quite a lot of work with Lex as well. Can you maybe give us some examples of the applications that you've built with Lex?

Gillian Armstrong: 05:11  

Sure. I mean, I think the biggest one I worked on was that employee digital assistant. We picked up Amazon Lex when it was in preview, so not long after it was announced at re:Invent a few years back, and we worked through some of the fun times; as you well know yourself, if you pick things up in preview, you kind of have to roll with whatever happens and deal with some of the early quirks in any system. But because of that, we were able to give a lot of feedback back to Amazon, and they were able to help us a little bit in exchange for us finding the bugs. And I think we walked into it not being sure. None of us had worked on chatbots apart from a little bit of Alexa development. We went in quite naive, and we thought it would just be magic. But we learned pretty quickly that, you know, your data is so key, as in any machine learning project, which, I guess, coming as a software developer rather than from the data science side, you don't actually appreciate how important it is that you have really great data. And we were building something for employees that didn't exist before. In some of our call centres, where we've used Lex for virtual agents, we have all these transcripts of people who have phoned in and the conversations they've had with humans. But when we built our employee digital assistant, we didn't have, you know, anything to go on to start with. We were working purely off things like: what do people search for on our intranet? What sort of emails does finance get? What sort of questions are being sent to the finance team? So we had to start to piece things together, really test and learn, and really gather that data and keep refining and refining those models. And we weren't sure at the start how far we would be able to push Lex, like how many intents could you really put in and still get pretty good results? But we got pretty far. And it's still being built up.
And we're getting a little bit more sophisticated with hooking bots together. But I really appreciate the fact that in AWS there are all these different layers of AI services. You have that top layer, which is kind of your serverless mindset, where you can just use the service: you don't own it, you're not paying for it if you're not using it, and you can get started right away with some of those services. Lex was our first one. And then if you are working with one of those services and you find you need something much more custom, you can start working down the layers and build, you know, as far down as you want. If you are a hardcore data scientist and you believe, right, I can build a better ML model than Amazon can, then you can do that; that's available to you. But if you're a software developer and you want to get started really quickly, with things like Lex, like Polly, even Alexa, it's really easy to get started. And actually you can get really great results just using the services.

Yan Cui: 08:56  

So you talked about refining and polishing the model there. What does that actually look like day to day? What are you actually doing to refine and improve those models?

Gillian Armstrong: 09:09  

Yep. So when you're building a chatbot, you're working off a series of what are called intents. Each of those intents is linked to an action, so some sort of fulfilment. In our case, some of them would be maybe just giving you an answer back, so searching for something and responding with, like, here's the document you're looking for. Some of them might actually take actions for you, so if you've locked yourself out of your phone, they might be able to go ahead and unlock it for you. Some of them are a little bit more conversational. But each of those intents is tied to a set of utterances. Those are a set of things that are how a human would phrase asking for that particular thing, and you need to get a large enough set, and enough variation in that set, to then be able to train a model that can correctly match the intent. Now, I know that sometimes people think that it's some sort of, like, regex. I had one of our business analysts ask me if we had a spreadsheet in the cloud, if that's how it was working. Not exactly how it was working. So although it's not an exact match, you do need enough variety in there, and you do need enough examples of things people really say. And then you also need to make sure that your intents are not overlapping. If people would ask for multiple intents in the same way, then it's going to get pretty confusing. If you put the same text in multiple intents, and a human wouldn't be able to work that out, then the chatbot isn't going to be able to work it out either. So you have to be really careful that you are both covering enough phrases, so you've got enough training data for each of those intents, but also that each of your intents' training data is different enough from the others that the model is not getting confused and not missing things.
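To make that overlap problem concrete, here is a minimal sketch in Python; the intent names and utterances are made up for illustration, and this is not tied to any real Lex API. It simply flags training utterances that appear under more than one intent:

```python
# Sketch: flag training utterances that appear in more than one intent.
# Intent names and utterances here are hypothetical.

def find_overlaps(intents: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each duplicated utterance to the intents that share it."""
    seen: dict[str, list[str]] = {}
    for intent, utterances in intents.items():
        for utterance in utterances:
            seen.setdefault(utterance.strip().lower(), []).append(intent)
    return {u: names for u, names in seen.items() if len(names) > 1}

intents = {
    "UnlockPhone": ["unlock my phone", "I'm locked out of my phone"],
    "ResetPassword": ["reset my password", "I'm locked out of my phone"],
}
print(find_overlaps(intents))
# {"i'm locked out of my phone": ['UnlockPhone', 'ResetPassword']}
```

A real model compares meaning, not exact strings, so in practice you would also want to catch near-duplicates, but even an exact-match check like this catches the worst confusions early.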
So a really key thing is being able to take a look at the things people have asked that the chatbot didn't understand and start to see: okay, are these things we should have understood? If so, let's get them in, let's retrain the model. Are these new things that we didn't even think of, people asking for functionality we don't have? In which case, that's really great feedback. One of the really nice things about a conversational interface is that it's completely open. If people go to your website and they wish there was a button that did something, unless they then, you know, work out how to go and ask you for feedback and let you know, you will never know that they looked at the page and went, I think there should have been a button here. Whereas when you have a chatbot that people are speaking to, or typing into, they're just telling you exactly what they want. They're basically telling you, this is how I thought it would work, or this is what I thought it would do. Now, if it's way off, potentially you're not doing a good job of describing what your bot can do. But it can be really interesting, because what it lets us do is, if we're finding things that we should have recognised, we can retrain the model. If we're finding things that are new that people want to know, we can add them really quickly into the model and give back a response that lets the user know that we understood them, even if we can't give them the answer yet. So say you want to ask about your payslip, but we can't actually give you an answer for that yet. What we can say back is, I'm sorry, I can't answer questions about your payslip yet, as opposed to, I'm sorry, I didn't understand, which is less helpful and a little more frustrating an answer for someone to get. And then behind the scenes, we can work to get that functionality in place for people. Because the chatbot architecture, you know, is a lot more than just Lex.
It's a full serverless architecture, with all of those individual scripted bits, so a whole lot of Lambda functions in there. So it's really easy to, sort of, put new bits into the conversation, to change things, to hook things up differently behind that model. We try to be really flexible, we try to be really responsive, and to keep responding to how the users are interacting with the bot. Even when you think you've got it really done, you know, someone new will come along and say something in a way you didn't expect. Especially as we are a worldwide company: how people speak in Belfast is not how they speak in Boston. So it is really important to be aware of the variations, especially as you start to roll out to different places.
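As a rough sketch of what one of those Lambda functions behind the bot might look like, here is a minimal fulfilment handler in the shape of the Lex (V1) Lambda event interface. The intent names and responses are hypothetical, including the tailored "payslip" fallback from the example above:

```python
# Minimal sketch of a Lambda fulfilment handler behind a Lex bot.
# The event/response shape follows the Lex V1 Lambda interface;
# intent names and messages are hypothetical.

RECOGNISED_BUT_UNSUPPORTED = {
    "PayslipQuestion": "I'm sorry, I can't answer questions about your payslip yet.",
}

def close(message: str) -> dict:
    """Build a Lex 'Close' dialog action that ends the conversation turn."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def handler(event, context=None):
    intent = event["currentIntent"]["name"]
    if intent in RECOGNISED_BUT_UNSUPPORTED:
        # We understood the user, even though the feature isn't built yet.
        return close(RECOGNISED_BUT_UNSUPPORTED[intent])
    if intent == "UnlockPhone":
        # In a real system this would call an internal API to unlock the phone.
        return close("Done! Your phone has been unlocked.")
    return close("Sorry, I didn't understand that.")
```

Swapping behaviour behind the model then really is just routing an intent name to a different function, which is what makes it easy to hook things up differently.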

Yan Cui: 14:23  

For these virtual assistants, conversational assistants, in a specific business context it might be okay. But Liberty Mutual is a multinational company; you've got people working in different countries, and different areas have their own local names for things. For example, in the UK we call it football, which in the US would be called soccer. Do you have to deal with anything like that, in terms of local slang or local dialects for names of things, and be able to understand that and translate it into a common model?

Gillian Armstrong: 14:56  

Yeah, absolutely. What we mostly try to do is include all of that variation in the one model. At the moment, we just have English as the main language; as we start to try and do different languages, that will be a very different thing. But yes, variations in how people ask for things, mostly we can get that into the one intent, because usually the variations aren't so broad that they're moving into a completely different set. But yes, we do need to account for soccer versus football, and that is definitely something that is easy to miss. If you are sitting in Belfast, you don't always consider the wide variation of slang; well, you know what, from Belfast to Dublin there's different slang, let alone from Belfast to Boston or Seattle or any of the other offices we have people in. So it's definitely something that we're looking for when reviewing things that didn't get answered by the bot. You see pretty quickly the different ways people speak. And it is always interesting, because natural language is very tricky. So I'm not working directly with our digital assistant at the moment; I'm working on some other natural language problems. One of the things we're looking at is reading emails, which is a very interesting problem, because you can think of an email as a special case of a conversational model. In a chatbot, you've got that single turn, where people ask for one thing and then they get a response. Whereas an email, you know, is still a conversation: people will start off and they'll say, “Hi there, it was great to see you last week.” Then they'll ask things, and then they'll say, you know, thank you very much, and they'll sign off with their name. So you still have a conversational structure, but you have multiple turns, basically, without the other person getting a turn.
So being able to understand what is being asked in an email is much more complicated than being able to understand what is asked in a single turn in a chatbot. And what we're finding is, there isn't one way of understanding any email in the world. You have to be very aware of the different types of things that are coming in and the different types of groups they're coming from; different groups use very specialist language, especially in the insurance industry. So understanding the particular phrases they use to mean certain things is really important. Understanding your domain is really critical.
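As a toy illustration of the rule side of breaking an email into conversational pieces (the real systems described here combine ML services with rules), here is a simple heuristic segmenter. The patterns and the sample email are made up, and a production system would need far more than this:

```python
import re

# Toy heuristic: split an email into greeting, questions, body and
# sign-off lines. Purely illustrative; real segmentation needs ML too.

GREETING = re.compile(r"^(hi|hello|hey|dear)\b", re.IGNORECASE)
SIGNOFF = re.compile(r"^(thanks|thank you|regards|best|cheers)\b", re.IGNORECASE)

def segment_email(text: str) -> dict[str, list[str]]:
    parts: dict[str, list[str]] = {
        "greeting": [], "questions": [], "body": [], "signoff": []
    }
    for line in filter(None, (ln.strip() for ln in text.splitlines())):
        if GREETING.match(line):
            parts["greeting"].append(line)
        elif SIGNOFF.match(line):
            parts["signoff"].append(line)
        elif line.endswith("?"):
            parts["questions"].append(line)
        else:
            parts["body"].append(line)
    return parts

email = """Hi there,
It was great to see you last week.
Could you send me the updated policy document?
Thanks very much,
Gillian"""
```

Calling `segment_email(email)` pulls the question line out from the greeting and sign-off, which is the "most relevant piece" you then want to route to intent matching.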

Yan Cui: 18:13  

So you mentioned there that an email is more of a conversation, with different turns, different parts to the conversation, and you talked about the chatbot being just a single turn: you ask something, you get something back. But can you maybe apply similar techniques to chatbots, so that I can, say, ask the chatbot, who is Gillian Armstrong, and then follow up with another question, what is her position in Liberty Mutual? What projects is she working on right now? So that reference to her, or she, being to Gillian: how do you infer and implement that context in the conversation so the chatbot understands those references?

Gillian Armstrong: 18:56  

With great difficulty. So I think that's still a big challenge. And it's interesting, even in the Alexa space, they're making a lot of great strides in that area. But being able to understand context and refer back is still a really hard problem in natural language, and it's still something that's being solved. I know, last year at re:MARS, they announced Alexa Conversations, and it was very much about being able to understand the context, carry through what you were talking about, and remember things. As far as I know, that's still in preview, and that's a year later. So I know Amazon is still trying to figure out how to scale it and how to make it work with, I guess, other people's skills as well as their own. In our space, part of it is, there is an element of machine learning, but on top of that we're doing a lot of rule-based work, so we can infer a lot of context. For instance, if you call our call centre, and you've called a particular line, we can make a pretty good guess about why you're calling. If you've called in, and we can see you have an active claim in progress, we can guess you're probably going to ask a question about it. In the employee digital assistant, we know what you've just asked, so if we're holding that context, we can try to infer that whatever you're asking next, if we haven't got a direct piece of information, we can probably fill in with one of the things that we already know. But it's a hard problem, because there's still a combination of some clever machine learning but also some clever, you know, rules put on top of that. And certainly with the emails as well, it's a very similar thing. We're using some of the Amazon AI services for that too, so we're exploring it using things like Comprehend and Lex, but there are some rules on top of that, in trying to break the email down into its conversational pieces.
So you can get to the most relevant piece: what is the important part of what's being said here? Which, you know, is the same in the chatbot and natural language space. What is the most important bit of what you're saying? And how do I tie that back to whatever has just come before?
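A minimal sketch of that kind of rule layer on top of the ML output might look like this, assuming a hypothetical NLU result with an intent name, a confidence score, and slots; none of these names come from a real service:

```python
# Sketch of "rules on top of ML" context handling: if the NLU result is
# low-confidence, fall back to the session's last intent, and fill any
# missing slots from what we already know. All names are illustrative.

def resolve(nlu_result: dict, session: dict) -> dict:
    """Fill gaps in the current request from prior conversation context."""
    resolved = {**nlu_result, "slots": dict(nlu_result.get("slots", {}))}
    if resolved.get("confidence", 0.0) < 0.5 and "last_intent" in session:
        # e.g. a bare follow-up like "what is her position?" keeps the topic
        resolved["intent"] = session["last_intent"]
    for slot, value in session.get("slots", {}).items():
        resolved["slots"].setdefault(slot, value)
    return resolved

session = {"last_intent": "PersonLookup", "slots": {"person": "Gillian Armstrong"}}
followup = {"intent": "Unknown", "confidence": 0.2, "slots": {}}
resolved = resolve(followup, session)
```

Here the follow-up question inherits both the topic and the person being asked about from the session, which is roughly the "fill in with one of the things we already know" behaviour described above.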

Yan Cui: 21:54  

Gotcha. So I guess what you're saying is that we can't have a J.A.R.V.I.S. in everyone's home quite yet. So in this case, you touched on Alexa and Lex. In terms of building a chatbot, if I was going to start building a chatbot today, how would you say I should decide between using Alexa, using Lex, and building some of that chatbot functionality myself?

Gillian Armstrong: 22:21  

Sure. It really depends on what you want to do with it and what your use case is. So obviously, Alexa is tied to Alexa: the only way your users are going to be able to interact with you is through an Alexa device or the Alexa app. And that can be really powerful, because you're getting the whole Alexa ecosystem. If your user is using Alexa, it's already where they are; it's really handy for them. But you are still stuck in that ecosystem. Lex, on the other hand, is just a natural language understanding service, so you can use it anywhere. But obviously it's not tied into Alexa, and you only get what you build on top of it, so your chatbot will only do what you create. But you can put it anywhere: you can put it on your website, you can put it in your mobile app, it can be chat only, it can have voice. So if you need something that's completely custom to you, and you need to be able to surface it wherever you want to, then Lex is the right way to go. If you want something that is available on the Alexa platform, then that's the direction you want to go. There is a lot of overlap in the technologies in terms of how you build, and they are somewhat portable between the two.

Yan Cui: 23:54  

I spoke with Aleksandar Simovic, I guess, quite a while back now; I think it was episode 18. We talked about Alexa as well. So for chat as a user interface, where do you see that going in the future? Do you think it's potentially the dominant user interface we're going to see in the future, like we see in Star Trek? And what do you think are the main challenges for us to get there?

Gillian Armstrong: 25:08  

Sure. Well, I mean, we're in a pandemic world now, where you don't really want to touch things. So I think voice is about to become a much more important modality than it has been previously, because if you now don't want people to be touching the same things, being able to just speak to something instead is going to be really powerful. So I think that may push the technology on a little bit more. I think that voice is a really natural way of interacting with technology; it is the primary way that we interact with each other. I think we're still a little uncomfortable with it, and the technology is not perfect yet, so it can be a little bit frustrating for people. I think I expected it to take off a little quicker than it has. But I absolutely believe that it is still going to be a really significant technology and a way of interacting with technology in the future. If you think about things like AR and VR, voice is just a really natural way of interacting with technology in a space where you don't really have physical touch. And I know in my house, all my lights and heat and everything, I just speak and it happens. I think voice is here. It's on everybody's phone. There are Alexa and Google devices sitting in a huge number of houses around the world. It is coming. It's not there yet. But there are definitely so many applications, and it's such a powerful technology, that we should expect to see it just keep growing and keep being used in more and more places.

Yan Cui: 27:20  

Funny you mentioned that the whole pandemic situation has potentially pushed this urgency for voice technology forward. The other day, I saw a pilot for something that companies are doing where they're implementing eye tracking for, I think, a device, a screen. Instead of a mouse, they track your eye movements, and then you can click by blinking, and something like that. And my first thought was, that's just crazy. Why not just use voice?

Gillian Armstrong: 27:49  

That doesn't sound super convenient.

Yan Cui: 27:54  

That was crazy. Fun, but crazy, and not in a good way. So in this case, what would be some of your, I guess, top AWS wishlist items when it comes to making Alexa, or voice chatbots in general, more accessible or easier to develop?

Gillian Armstrong: 28:16  

Sure. I would really love to see some natural language generation capabilities, especially with one of the newer technologies, Kendra, which came out not too long ago. It lets you do natural language searches on your corpus of documents or information you have, but it can't do natural language generation. So it will find the text it thinks is most relevant and bring it back up, but what it can't do is then frame that back as a sentence in response to your sentence; it can just serve back up what the text is. I would really, really love to see a little bit of natural language generation, where you don't have to hard-code basically all the responses for your chatbots, and chatbots can start to be a little bit smarter about assembling their responses to you and sort of inferring. I think that would be pretty exciting to see. It is interesting that they announced DeepComposer. I think people are still struggling a little bit with what to do with Amazon DeepComposer, but I think generative AI is a really fascinating space, especially in natural language. I think there are some very interesting things to watch in that space. I would also love to see more serverless AI. We definitely have some services that are definitely serverless, but there are lots that are not. There's been a little bit more of a push in SageMaker for it to be a little bit more serverless, and I would love to see more in that space as well, because I think there's a lot of potential there. I think those are my big things.

Yan Cui: 30:22  

What do you mean by serverless AI? I've used SageMaker a little bit, and I kind of understand how it works at a high level, but I'm not quite sure what you mean there, when you say serverless AI.

Gillian Armstrong: 30:36  

So some of the new things that are still, I guess, being rolled out are things like, well, they don't call them serverless notebooks, I don't believe, but essentially they're serverless Jupyter notebooks. Jupyter notebooks obviously are not your primary way of developing applications, but they're very useful for data scientists and exploratory work in machine learning. Previously, you create your compute, you spin it up, and then you can open some Jupyter notebooks on it. If you want to port them somewhere else, you have to copy them down, spin up new compute, and put the Jupyter notebooks on that one. Whereas now, what they're doing with the new SageMaker Jupyter notebooks is that you create the notebook, you can share it with other people, and then you put the compute underneath it. And you can change out the compute: you can spin it down and the notebook stays there, and then if you go, okay, well, I need a much bigger box to run my training job here, you don't have to move the Jupyter notebook to different compute; you can put different compute underneath that same notebook. And I really like that thought, that it's much faster to be able to use those notebooks, and you're not leaving a server running there. You get really used to serverless; you're used to, if you're not using it, it doesn't cost you anything. Whereas if you forget to turn off the EC2s that you aren't really using, they definitely are costing you something, and just to run Jupyter notebooks, that's not a very cost-effective way. So I think seeing things like that, where you can do your training and your exploration work but you're just paying for what you use, without you having to do the heavy lifting of remembering to shut everything down and spin it all back up again manually, would be really great to see.
And I'm always a big fan of the AI services as well, especially the ones that are pay as you go and are hosting the models for you, so you're not paying for hosting either.

Yan Cui: 33:16  

Gotcha. That makes a lot more sense. That's the last of my questions. Is there anything else that you'd like to tell the listeners while we're here? Maybe a personal project that you want to share? Or maybe, is Liberty Mutual hiring right now?

Gillian Armstrong: 33:30  

Liberty Mutual is of course always hiring. And if you are in the UK or Ireland, in Belfast and Dublin, we always have positions going in our offices there as well. And I suppose in terms of me, come and connect with me on Twitter. I'm @virtualgill, that's virtual-g-i-l-l. I always like to hear from people who are interested in serverless and AI, so come and tell me what you're doing, or you can keep track of what I'm doing. I am most active on Twitter; that's definitely the best place to connect with me.

Yan Cui: 34:13  

Awesome. I'll put that information in the show notes so that anyone can get in touch with you through there. And again, Gillian, thank you so much for taking the time to talk to us today.

Gillian Armstrong: 34:24  

Sure, no problem. It's great to be on and great to chat to you again.

Yan Cui: 34:29  

Yeah, take care and stay safe.

Gillian Armstrong: 34:31  

You too.

Yan Cui: 34:32  

Okay. Bye bye. 

Yan Cui: 34:37  

So that's it for another episode of Real World Serverless. To access the show notes, please go to If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at And I'll see you guys next time.