The Relicans

Mandy Moore

Users & Hugs – Charging Back The Value with Chris Riley

Jonan Scheffler interviews Splunk Developer Advocate, Chris Riley about DevRel Advocacy, aiming more for HugOps for others when things inevitably go wrong, and bringing security practices earlier in the software delivery chain.

Should you find a burning need to share your thoughts or rants about the show, please spray them at While you're going to all the trouble of shipping us some bytes, please consider taking a moment to let us know what you'd like to hear on the show in the future. Despite the all-caps flaming you will receive in response, please know that we are sincerely interested in your feedback; we aim to appease. Follow us on the Twitters: @ObservyMcObserv.


Jonan Scheffler: Hello and welcome back to Observy McObservface, proudly brought to you by New Relic's Developer Relations team, The Relicans. Observy is about observability in something a bit more than the traditional sense. It's often about technology and tools that we use to gain visibility into our systems. But it is also about people because, fundamentally, software is about people. You can think of Observy as something of an observability variety show where we will apply systems thinking and think critically about challenges across our entire industry, and we very much look forward to having you join us. You can find the show notes for this episode along with all of The Relicans podcasts on We're so pleased to have you here this week. Enjoy the show.

Hello and welcome back to Observy McObservface. I am Jonan, and I'm joined today by my friend, Chris Riley. How are you, Chris?

Chris Riley: I'm doing great, Jonan. Well, I did tell you right before we joined that for whatever reason, my afternoon low hit me really hard today. But I got my Iron Man cup and my coffee, so that's going to be all the inspiration I need.

Jonan: I need some extra coffee in here, actually. I saw a picture someone tweeted the other day of a hammock you can hang under your standing desk. And as critical as I am of that leading to a work-life balance that is non-existent, I really like the idea of taking a nap under my desk. It sounds pretty good some days.

Chris: My first thought is that I would hurt myself so bad. [laughter]

Jonan: Spinning around. I'd say one of those little cocoon ones you could just wrap up and hide. You put a stuffy in there when you're not using it, so people think that you're just like --

Chris: Go in the fetal position after a bad meeting. Yeah, that makes sense.

Jonan: [laughs] So I know a little bit about you, but I imagine our listeners don't. Why don't you give us a little background on yourself?

Chris: I am a DevOps Advocate/DevRel at Splunk. Normally how I classify that (and I'm sure you have a lot of experience as well trying to explain this type of role because it is rather unique) is I'm a bad coder turned somebody who could not give up talking about applications. So I've always really been obsessed with the application development life cycle, even though I was never really good at doing it myself.

Jonan: I definitely miss the days when I was able to code all the time. I've been really fortunate in my DevRel roles to be pretty engineering-heavy in my career. But now, I have assumed this leadership position on this team, and it is far less code than I would like. I miss it dearly. But we get to build great demos. I mean, from one perspective, being in DevRel is kind of you get to build cool things with whatever technologies you want, but you don't have to maintain them in production or very often at least carry a pager.

Chris: Yeah. Recently, I was carrying a pager just so I could re-live the experience of being an SRE. But you're absolutely right; the accountability is quite a bit less. I think when you say that, what's interesting is…I've had this conversation a lot. Those who have a traditional CS degree oftentimes aren't really set up very well to be good coders, and that was definitely true with my degree. So I get to flex my architecture talent a lot without finding out where a missed semicolon is.

Jonan: Yeah, I do enjoy that angle. Getting to talk at a higher level about software is an interesting place to be. So you work here at Splunk, and you've been doing this DevRel thing for a while. And lately, you've been working on that game you told me about, the O11yQuest Game.

Chris: Yeah, O11yQuest. Yeah, it's a cute RPG single-page app game. And part of the effort...first of all, the game itself was part of the effort, and we had some nice t-shirt giveaways and all this good swag stuff. But another part of it was to make sure that we experienced using observability tooling behind the game to really understand how it all comes together. So we had incident response, we had APM, we had infrastructure monitoring. I found it really fascinating that even for a single-page app, we discovered some really interesting things, namely the relationship between the back end and the front end in the latency issues there. Also, we released the game in the U.S. specifically, but we saw global usage, and the latency we found from other regions was tremendous, so we went down this path of trying to figure that out. We decided to use some synthetic testing to go from the user experience back into the application. And no big surprise, our images in the game were massive and causing a lot of latency. Then we had some queries on the back end that were causing a lot of latency and spikes there. Synthetics allowed us to discover those and resolve them. So it was a really fun exercise. We did have a few late-night pages, and it's not a fun experience, you know this, right? [laughs]
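The synthetic-testing loop Chris describes, probing from the user's side and tracing latency back into the application, can be sketched very roughly like this. This is a stdlib-only illustration; the URL, the threshold, and the injectable `fetch` hook are all made up for the example, not part of any product Chris mentions:

```python
import time
from urllib.request import urlopen

def synthetic_check(url, fetch=None, slow_ms=500):
    """Probe an endpoint the way a real user would and flag slow responses.

    `fetch` is injectable so the check can be exercised without a network;
    by default it performs a real HTTP GET.
    """
    fetch = fetch or (lambda u: urlopen(u).read())
    start = time.monotonic()
    body = fetch(url)
    elapsed_ms = (time.monotonic() - start) * 1000
    return {
        "url": url,
        "bytes": len(body),        # oversized game images would show up here
        "latency_ms": elapsed_ms,  # back-end query spikes would show up here
        "slow": elapsed_ms >= slow_ms,
    }
```

Run on a schedule from several regions, a check like this surfaces exactly the two problems the team found: huge payloads and slow back-end queries.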

Jonan: Yeah, especially with those kinds of demos, because the time when DevRel people do carry a pager is often when you're at a conference, or especially when people are using the same demo at another major event simultaneously. You're trying to manage that demo. You're on the hook. You're fixing the thing. If someone is in the booth with your demo across the country and it goes down, you have minutes to get that thing back online. It's about as serious as a production outage gets in DevRel. It's kind of a big deal that you solve that thing quickly because you've got a whole bunch of money, time, and energy invested in this booth and the whole demo experience. Otherwise, people are just standing there handing out t-shirts and thanking people for coming by.

Chris: Yeah. The keyword there was experience because, I think, in modern applications, it's all about service. We're no longer shipping boxes where the biggest thing you had to worry about was whether it installed correctly, and the user just had to take what they got. Now, the expectation is that we're white-gloving this. And I think what's interesting is that it's like any service. If you have house cleaning, landscaping, any type of service that you consume, you expect a certain quality in the output, and now we're in the same realm with applications. It comes down to experience because latency is the equivalent of being down now. [chuckles] It's not just about your site being down. It's, is it performant?

Jonan: Yeah. A lot of companies are now wisely using metrics that focus on that. They're setting their goals as companies around user experience, like meaningful content load. There's a measure now of not whether the page loaded in a certain amount of time, but how long it took to get someone to something that actually made sense, right?

Chris: Yeah.

Jonan: Because in the case of large social networks, for example, you have an infinite scrolling situation. How long does it take a user to actually see something that feels like they're experiencing the application rather than waiting for the whole page to load? I appreciate that the industry is headed in that direction because it really should be all about the users in the first place: product-led growth, user-led growth, and all of these other methods of marketing that are appearing. And with DevRel being tangential, I'm only loosely familiar with the terms. But it seems to me like it shouldn't be a revelation for people: "Oh, you mean we should focus on the humans using the software instead of the software for its own sake?" "Well, yeah, that's why you exist, because of the people in there."

Chris: I'm curious about your perspective, because we get this question a lot from end-users...and not end-users of the application, end-users of observability, people instrumenting this and using it, asking, "Okay, just tell me what metrics. What are my metrics?" They want the vendor to tell them. And then, of course, you can come back and say, "There's RED, and you've got USE." And that's a great starting point, but you have service-level metrics too. And we can't tell you what your service-level metrics are. Do you experience the same thing?
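The RED method Chris name-checks (Rate, Errors, Duration per service) can be made concrete with a minimal in-memory tracker. This is a hypothetical illustration of the method itself, not any vendor's API; the service name and reporting-window semantics are assumptions:

```python
from collections import defaultdict

class RedMetrics:
    """Track the RED trio per service: request Rate, Errors, Durations."""

    def __init__(self):
        self.requests = defaultdict(int)
        self.errors = defaultdict(int)
        self.durations = defaultdict(list)

    def record(self, service, duration_ms, error=False):
        self.requests[service] += 1
        if error:
            self.errors[service] += 1
        self.durations[service].append(duration_ms)

    def summary(self, service):
        n = self.requests[service]
        durs = sorted(self.durations[service])
        return {
            "rate": n,  # requests seen in the reporting window
            "error_ratio": self.errors[service] / n if n else 0.0,
            "p50_ms": durs[len(durs) // 2] if durs else None,
        }
```

This is the "great starting point" part; the service-level metrics Chris says vendors cannot hand you (which signals actually matter for *your* checkout flow, say) have to come from the team itself.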

Jonan: Yeah, I've been fortunate in that a lot of the people I end up talking to are around in the conference scene and maybe building a little more often in modern software spaces. They're not so much working in legacy codebases or on teams where you're selling the dream of observability. It is assumed that you're going to be setting these things up in the first place, and they understand pretty well what they're trying to measure. But I've definitely spoken to the occasional person who's like, "Okay. Well, great. Now I've got all these dashboards, but what should I actually care about?" And we have reasonable defaults. Sometimes we have things like the Apdex score we can point people to and be like, "Well, hey, if this gets real bad, then that's a good indicator." But ultimately, we both know it's much deeper than that. To really achieve observability across your stack, you need to know what you're trying to achieve pretty clearly. So this actually leads me to a point I wanted to talk to you about with monitoring and observability. Even if we get people on board with our products and get them to understand the reasons they want observability, actually getting buy-in from their teams and the rest of the organization is a pretty significant hurdle. There's a long series of hurdles to overcome to implement observability across an organization. It's not just understanding the solutions that exist and how to use them; it's also convincing your co-workers, and the leadership, and the rest of the company that this is a meaningful exercise. How do you advise that people go about that?

Chris: Yeah, this one's really fun. And I've been fascinated, at least in the last three years, with trying to take this fluffy stuff we've talked about around culture, and stewardship, and two-pizza teams, and all of that, and bring it to the business level, because it's very, very hard. I don't think we're any longer having the debate with enterprises about whether culture matters. Culture exists whether you create it deliberately or not. You can have a good culture, and you can have a bad culture. But it's still very hard to change culture, especially if you're not a decision-maker. So there's a new method I've seen start to work that I'm a big fan of. If you're part of a monitoring team or SRE team, and you start to treat your practice and the tooling that you offer as a product (your customers being developers, SREs, and DevOps engineers) and treat running it as a product management activity, then a lot of really interesting things happen, especially if you're in an enterprise that has chargeback, because some large enterprises get paid for the services that they offer the rest of the organization. What you can do to use it as a forcing function, and there are a few enterprises out there that do this really well, like, Publishing is one, is to offer two different versions of your product. The first version of your monitoring or observability product is a login. You get a login. You're on your own, but we give you a login. Fortunately, the vendors have invested a lot of energy in great user experience, so onboarding new data, getting into the product, understanding how to use it: there's a lot a user can do on their own. But at the same time, you get to the question of, are they doing the right things? Are they measuring the right things? Are they visualizing in the right way? Are they communicating in the right way? And you can easily get lost.

So the second version of the product is you get white-glove support. As the monitoring team, we come in. We help you with your dashboards. We do a service onboarding checklist. Before you onboard a new service, we make sure: do you have an on-call schedule set up? Are you measuring the right data at the service level? Maybe you're doing contract testing and so forth. Are you using the right artifacts, and do you have everything needed to make sure that when a new service is brought into production, it is secure, stable, and providing the best possible quality for the end-users? The price of getting version two of your observability product is following best practices. So basically, you go in, and you're like, "Well, we really want you to use this CI/CD pipeline. We really want you to use these practices for how you do your testing, et cetera." What you're doing is creating some of this peer pressure (and I know it sounds awful to say) where you have teams who choose product number one, where they get a login and get lost quickly. And then you have teams who follow the best practices, and they get version number two, and they're a lot more successful. One of the things that seems to happen a lot is that the people responsible for providing the observability tooling end up becoming dashboard monkeys, and that's not what their job should be. Just like your job shouldn't be reading email, you shouldn't be a dashboard monkey. You should be establishing good strategies for on-call and building dashboards and so forth. And a way to do that is to channel people down the happy path for being successful with observability. So I like this approach. I've seen it used a few times, and I think it's super interesting.
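The service-onboarding checklist gating the white-glove tier could be modeled as simply as this; the item names here are invented for illustration, loosely following the examples Chris gives (on-call, service-level data, contract tests, an approved pipeline):

```python
# Hypothetical checklist items; a real monitoring team would define its own.
ONBOARDING_CHECKLIST = (
    "on_call_schedule",       # is an on-call rotation set up?
    "service_level_metrics",  # are the right service-level signals measured?
    "contract_tests",         # are the service's API contracts tested?
    "approved_pipeline",      # does it ship through the blessed CI/CD pipeline?
)

def readiness(service_facts):
    """Return (ready, missing) for a dict of checklist item -> bool."""
    missing = [item for item in ONBOARDING_CHECKLIST
               if not service_facts.get(item, False)]
    return (not missing, missing)
```

The point of encoding it is the forcing function: a service only gets the version-two treatment when `readiness` comes back clean, which is exactly the peer pressure mechanism described above.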

Jonan: So you present your services. This is an interesting concept to me: you actually get paid for providing services to other teams within your company?

Chris: Chargeback.

Jonan: Like, additional budget onto your organization.

Chris: Yeah. So basically, the chargeback model is fun because it's good and bad. You want people to use observability in their practice, and chargeback can give them a reason not to. But if you get past that hurdle, and hopefully your teams are wise enough to understand that without end-to-end visibility across the entire stack they're shooting themselves in the foot, the nice thing about chargeback is it makes people take it seriously. [laughs] If part of your budget is going to these services that you're consuming, you're going to take them a lot more seriously. And you're going to think about instrumentation, which is something that people get wrong so often. They just expect the agent to collect the best possible data for them without thinking about, well, are you doing it consistently? Are developers instrumenting consistently? Are you deploying your agents consistently? All of that stuff. So yeah, when enterprises are using chargeback, it's kind of fun.

Jonan: I feel like there are a lot of people who don't actually get the instrumentation right. It's never perfect on the first go. It's all iterative. But there are definitely people who implement this thing as though they're putting out fires like, "Oh, we had an incident around this portion of the code. This is a high churn piece of code. Let's get some more instrumentation in there.” And then they wait for the next incident, and then they go back in afterward. And this, of course, runs counter to the idea of observability that we actually have all of the things that we need to properly diagnose the issue in the moment as things arise. We actually can understand the systems, have full visibility into all of the components of the system at the moment that we're going to need them, and not be coming in after the fact or certainly not trying to debug these things in production systems as things are going down.

Chris: So that's why I'm really excited about OpenTelemetry. I'm hopeful that we start to break this habit. Although you can implement OpenTelemetry in code even if you don't use the auto-instrumentation stuff, and that makes me nervous, because one of the powers of observability is when you use tags and spans. If developers aren't doing this consistently, like you said, it's an information architecture problem that no technology will ever solve for you. But at least you can democratize your agents so that the agent doesn't necessarily become the forcing function, and you don't have to worry so much about the consistency there.
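The consistency worry can be partly engineered away with a thin helper that stamps every span with a team-wide attribute set. A sketch assuming an OpenTelemetry-style span object exposing `set_attribute`; the attribute names here are hypothetical team conventions, not an official schema:

```python
# Hypothetical team-wide attribute convention.
STANDARD_ATTRIBUTES = ("service.name", "deployment.environment", "team")

def tag_span(span, attrs):
    """Apply the shared attribute set to a span, failing loudly on gaps.

    Routing all tagging through one helper keeps developers from each
    inventing their own tag vocabulary, which is the information
    architecture problem no tool fixes after the fact.
    """
    missing = [k for k in STANDARD_ATTRIBUTES if k not in attrs]
    if missing:
        raise ValueError(f"missing required span attributes: {missing}")
    for key, value in attrs.items():
        span.set_attribute(key, value)
    return span
```

In a real codebase this helper would wrap the tracer you already use; the value is that a missing attribute fails in code review or CI rather than showing up as an unqueryable gap in production telemetry.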

Jonan: I think we'll start to see a lot of frameworks adopt OpenTelemetry out of the box, much like a lot of the DevOps tooling stuff uses Prometheus endpoints to supply metrics now. I really like the idea that we are moving away from these walled gardens and opening things up so that I get to choose the tools that I feel are the best for my needs in this moment. And I can trust that they will always work well together because they're all using these open standards that exist. So the most recent incident that I saw that I wanted to talk to you a little bit about is this Fastly outage that came down last week. I imagine you all were affected by that to some degree. Did you use Fastly at all?
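The Prometheus-endpoint pattern Jonan mentions works because the exposition format is just plain text that any framework can emit. A minimal stdlib sketch of the rendering half (the metric name and layout follow the text format; real applications would use an official client library):

```python
def render_prometheus(metrics):
    """Render {name: (type, value)} in Prometheus text exposition format.

    Each metric gets a '# TYPE' hint line followed by 'name value'.
    """
    lines = []
    for name, (mtype, value) in sorted(metrics.items()):
        lines.append(f"# TYPE {name} {mtype}")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

Serve that string from a `/metrics` route and any Prometheus-compatible scraper can collect it, which is why the open standard makes tools composable in the way the conversation describes.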

Chris: Shockingly, I didn't notice it, and I don't know why. So when something similar happened at Cloudflare, it was a completely different type of incident with Cloudflare. I noticed it in a lot of services I use. It was Trello; I think it was Slack. It was a whole bunch of stuff.

Jonan: Everywhere, right?

Chris: Yeah.

Jonan: Everything was down when Fastly went down.

Chris: Yeah. And that's always an awkward experience when you're trying to get stuff done from a productivity standpoint [laughter] because the immediate next thought is like, oh, well, obviously it's me, it must be my router. It can't be a CDN or something of that sort. It's got to be us. This incident was super interesting. One thing I will say is that the MTTR there was respectable. It was what? 40 minutes, 49 minutes.

Jonan: Something like that. It was less than an hour that they were down, right?

Chris: Yeah. That was fairly impressive. My read was that there was a recent feature that was released that had a bug. I don't know 100% if that is correct. But the bug allowed configuration settings changes in one account to basically bring the entire system down. [chuckles] I laugh. It's not fair to laugh because these things happen. They always happen. There's no way to avoid it. But I do find that really interesting. I respect the MTTR, which means that they probably did a rollback of some sort. And just the scale of it, I mean, besides DNS and your CDNs and so forth, it's the equivalent of your cloud provider going down. It's tremendous.

Jonan: I think a lot of people had responses across the spectrum. But most of the people I know, being pretty experienced in the industry, have probably been that person who shipped the bug themselves. In this case, you could say it was user error, but the user was making a perfectly valid change. It just happened that they were the first one to encounter a bug that already existed: they made a change to their configuration, and the bug brought everything down. In any case, people were either extremely critical of Fastly for ever having had any downtime, with people on Twitter replying like, "I just shorted all your stock." And I'm pretty sure that guy lost a lot of money because the stock went up, I think, 12% or 15% in the hour after they came back online. Or you have people who are sending HugOps. You can actually find the downtime on the internet by searching the HugOps hashtag and figuring out who's trying to hug whom virtually. Do you feel as though you land in one of those camps? Where would you land?

Chris: I absolutely land in the virtual hugs camp. This has happened and will happen to every SaaS provider. It's just going to happen. And I know users are...look, the applications we deliver are a reflection of the engineering team behind them, but so is your response. So far, I find their response fairly respectable. I've seen better. I've seen far worse: people who passed the buck, versus people who say, "Yeah, something bad happened. This is what happened, and we fixed it." So I'm completely empathetic to the situation. What's also fascinating about it is that in the early days of hosting, there was a lot of sharing of infrastructure, and you would have the same thing happen. You would be hosting a WordPress site, and it was shared on a server with five other WordPress sites. And somebody would do something crazy like update a plugin (because never do that on WordPress), and it would peg the server, and your site would go down. And you'd reach out to support, and you're like, "Why did this happen?" And they'd go, "Oh yeah. Another site on the server was consuming resources, and we just killed it." It's funny to me that the nature of some of these problems changes, but they never go away. We're still addressing similar issues to the ones we've always been addressing. And kudos to Fastly, which has a very high blast radius when something goes wrong, for how they addressed it. I would like to see more details, though.

Jonan: Yeah. I was going to applaud their transparency because I feel like during the incident, they were pretty communicative. But then I want to see that retrospective a little bit. I want to see the blog post about like, "Hey, here's the technical failing." I want them to help the industry learn from their error here. And I get that when you reach a certain size, it becomes more difficult to get that level of transparency. But it would, I think, help them out from the perspective of their customers just because it inspires trust in me to at least see that they understand the technical failure that happened and that it wasn't like a, "Well, what we did is we deleted all those nodes, and then we built them again, and then we fixed it." [laughs] It's like, “Well, okay,” because I have no guarantee actually that's not going to happen again next month. That's scary.

Chris: Yeah. You just reminded me of something: this is where chaos engineering is super fascinating as well, and just having a culture of resilience. But again, you're never 100% covered; it's all risk mitigation. You're not ever going to fully prevent outages, ever.

Jonan: I wish it were easier to get the rest of the world to understand that. Computers are, as we say, deterministic. You put a thing in, and you get a thing out. That is how computers work, but there are also a lot of things going in and out of them all the time.

Chris: Oh yeah.

Jonan: And to some degree, there's chaos in the system, and there's nothing you can do to extract it. You can embrace it. You use chaos engineering to simulate it and then be able to make it a little more predictable. But it is absolutely risk mitigation all of the time. It just happens that Fastly controls a lot of the internet services that we all know and love, which actually is to their credit. I think, if anything, that share price spike indicates that people realized Fastly is actually doing a lot of good work out there. There are a lot of people who trust them to keep their services online in part because they're able to give us 40-minute turnarounds on significant issues within their configuration. But I want to know how bad it was. Make yourself look like heroes. Tell us what you actually achieved in that 45 minutes.

Chris: [laughs] That is a call out to Fastly. Let us know.

Jonan: Get on that, Fastly.

Chris: From a scientific perspective, understanding that would be really useful. I don't know how to solve the demanding-user problem, though. I see computer rage in people like my wife and my parents and so forth. There's no way for me to even broach the conversation and be like, "Well, hold on, there's a lot going on here." [laughter]

Jonan: I feel like it reminds me a little bit of driving with my wife and how she'll take any stranger's side over mine in traffic, anytime.

Chris: [laughs]

Jonan: I'll be like, "That person cut me off, and I want to yell at them." And she's like, "They're probably having a bad day." That's how my mother would feel, I think, receiving similar feedback. I'm like, "Look, mom, keeping servers online is hard." "I don't care. I want my phone to work right now, you know?" Yeah, I get it. I understand why people get frustrated with these things. But in that moment of crisis, you want to show up to support your friends in the industry. Please aim more for HugOps rather than terrorizing them through any communication means you can find, like looking people up on LinkedIn and sending them hate mail. So this has been a wandering conversation, but I do actually enjoy these kinds. This is exactly what I was hoping we would achieve today. So thank you very much. I do want to ask you a couple of questions that have become something of a tradition on the show. The first one is where you get to make a prediction about what we're going to see trend-wise in our industry, observability or otherwise, over the next year or so, with the goal of my being able to bring you back on the show eventually and accuse you of having lied to us. What do you think is coming for our industry over the next year or two?

Chris: You know, I don't like this one because I used to be really good at it. I said that by now, serverless would just dominate, and it hasn't yet. [chuckles] We certainly see a lot of amazing adoption, but I really thought everybody was going to be 100% serverless by now. So I think that we're getting really boring in the world of DevOps, in that the innovation is there, and it's fascinating, and it's cool, but people are still solving real-world problems. I use this term tech-enabled enterprises, and the people who work on engineering teams at tech-enabled enterprises definitely have the spirit and the motivation for DevOps, SRE, and DevSecOps, but they haven't been able to execute. So when I say boring, I think we're just going to get really nitty-gritty and serious about execution. The other part of that is security, security, security. It's like healthcare. Bringing security practices earlier in the software delivery chain, I think, is going to continue to be a big initiative because people are terrified of being in the news for vulnerabilities.

Jonan: That's, I think, a very good prediction. That's one you can stand by. The serverless thing, I will admit, I also thought was going to grow more quickly. It is gaining traction, but the world showed up and was like, "Ta-da, we have this serverless thing. Look, we have freed your application layer." And then you had the actual debugging process of working with those services. In the beginning, it was like, "Wow, I want to rip my eyes out of my head. I'm pasting my code into a GUI that I wait to load in some management console. This is your alternative to me? I'm going to go back to the command line and hack away for a while. Talk to me in a couple of years." But those tools are catching up. It's becoming much more usable. And I do think that overall that approach makes sense. The decoupling of the application runtime layer from the data storage, with all of these pieces coming into their own as components, gives us so much more flexibility, much like Twelve-Factor apps did back in the day, when you got these disposable, horizontally scaled application infrastructures instead of just continuing to buy more and more memory and depending on Moore's law to make sure you could buy the biggest monster server ever to run your one application and serve the whole planet. So another question I like to ask people is, what advice might you have for someone who's listening today and aims to be where you are, someone who may be just entering the industry? Or even advice for your past self, earlier in your career?

Chris: Oh geez. The past-self one is interesting, and I've been asked it before, so I think I have a response. But I'll start with this: if you're an engineer and you've found that you really have an aptitude for writing and producing content and really enjoy explaining technology, DevRel advocacy is an awesome path. It can be one of the most rewarding, fun jobs. My first response, if I were to mentor you on this, would be: don't do it. [laughter] And that would just be a jumping-off point to ask the questions of why. What is the reason for it? And, actually, this is a solicitation: if anybody's interested in having that conversation, I'm more than happy to have it. But the reason I go there is that the role may sound a little more glamorous than it actually is, but it can also be extremely rewarding. And so I think the outside perception of this particular role...Now, in engineering in general, if you're an engineer and you want to grow your career, think about the business. I know it's not fun. But you have to think about how what you're building, the feature and functionality, ties to the business. And that is only going to help you. It's going to help you justify buying that next amazing observability tool that you want, buying a testing service, whatever it is. You have to think about the business, and you have to think about how to communicate that. To my previous self, I'd just say, "Chill out, man." I think I was too high-strung. I still am. [laughs] But I would tell myself to calm down a little bit and not think that everything has a lifespan of six months, because I always expected that if I didn't jump on something today, it would be gone in two weeks, and the reality is very different. Things evolve. It's not a matter of one practice being the end-all, be-all; it matures, it changes. And so I think that would be my advice.

Jonan: These are both really good pieces of advice. I think a lot of people today are looking into getting into DevRel. Actually, I know they are. During the pandemic, we saw an explosion in people, in jobs, and in interest in getting into developer relations. And I'm frankly a little concerned for everyone who's jumping into this thing being like, "Oh, yeah. It's like going to conferences more. That sounds fun. I like conferences." We're like, "Okay, you don't like conferences the way DevRel people do them, I promise." It's like 18 hours a day, all the stress in the world, carrying a pager for two different conferences running simultaneously. Yeah, it's a lot. But the bit about calming down earlier in your career, I could have taken that one as well. That's good advice. I appreciate that one because I felt a lot like I needed to stay on the edge of everything. That's how it felt when I was first starting out, maybe 10 or 12 years ago in tech, choosing the language I was going to use; I wanted to make sure I got the right language. If I'm going to study things, it's got to be the right one. Of course, at this point in your career, everyone realizes it doesn't really matter. If I really put my head down for a couple of months, I could learn almost anything I need to learn. I wish we could just get that across to the recruiting teams who are putting 18 years of Swift experience on all the job listings.

Chris: Yeah, recruiting in tech is such a huge conversation.

Jonan: [laughs] That's a whole nother podcast. Yup. All right. Well, it's a pleasure and a privilege to have you on the show. Thank you so much for coming, Chris. And if people wanted to find you on the internet, where would they look you up?

Chris: Well, first of all, thank you, Jonan, for having me on. This has been a lot of fun. I'm on Twitter, the Twitter thing, as @HoardingInfo; fun backstory to that. I also have two interesting podcasts; one is a personal podcast, Developers Eating the World. You can find it on And then there's Tech A'Sketch, which is on YouTube and is really fun: we do graphic recording as we talk, mostly around career development stuff, because I'm very interested in that. But I would invite people to reach out. Like you, I love just having a conversation. You learn a lot just by chatting with other techies and getting all the nuances of what goes on in the market. So LinkedIn, I live on LinkedIn probably more than I care to.

Jonan: [laughs]

Chris: LinkedIn's a good place as well.

Jonan: Excellent. All right. Well, I hope that we get to talk in the next year or so and revisit your predictions, but until then, take good care of yourself, Chris. Thanks for coming on the show.

Chris: Thanks.

Jonan: Thank you so much for joining us. We really appreciate it. You can find the show notes for this episode along with all of the rest of The Relicans podcasts on In fact, most anything The Relicans get up to online will be on that site. We'll see you next week. Take care.
