
Don’t Panic – Blackholes and a Culture of Curiosity with Liz Fong-Jones

Mandy Moore · 25 min read

Principal Developer Advocate for SRE & Observability at Honeycomb, Liz Fong-Jones, talks about the questions she regularly gets from companies and clients looking to bring observability into their workflows, and defines observability as a mechanism for improving the operability of systems.

Liz says we shouldn’t talk about observability in a vacuum and that instead, it's a technique for analyzing production data that goes hand in hand with other production techniques, team philosophies, and development methodologies.

Should you find a burning need to share your thoughts or rants about the show, please spray them at devrel@newrelic.com. While you’re going to all the trouble of shipping us some bytes, please consider taking a moment to let us know what you’d like to hear on the show in the future. Despite the all-caps flaming you will receive in response, please know that we are sincerely interested in your feedback; we aim to appease. Follow us on the Twitters: @ObservyMcObserv.


Jonan Scheffler: Hello and welcome back to Observy McObservface, proudly brought to you by New Relic's developer relations team, The Relicans. Observy is about observability in something a bit more than the traditional sense. It's often about technology and tools that we use to gain visibility into our systems. But it is also about people because, fundamentally, software is about people. You can think of Observy as something of an observability variety show where we will apply systems thinking and think critically about challenges across our entire industry. And we very much look forward to having you join us. You can find the show notes for this episode along with all of The Relicans podcasts on developer.newrelic.com/podcasts. We're so pleased to have you here this week. Enjoy the show.

Welcome back. I am joined today by my guest, Liz Fong-Jones. How are you, Liz?

Liz Fong-Jones: Hi, I'm doing great, well, as great as anyone can be during COVID.

Jonan: I know. It's been a long pandemic, to be sure. I'm hanging in there, though, and I think, in general, my life is getting better. Maybe I'm settling into the groove. I don't know. You're doing well, though.

Liz: I'm doing all right. I just moved into a new place that I'm getting to stay in for a year as opposed to an Airbnb for four months.

Jonan: Oh my gosh. You were in an Airbnb before this place?

Liz: I was in an Airbnb for four months.

Jonan: Wow. Is it because you were traveling beforehand? Were you one of the nomads, and then you had to settle down quickly?

Liz: No, but I moved from New York to Vancouver, BC, before the election, and then I needed some time to find a permanent place. But yes, now I am spreading the word about observability from Vancouver, BC, and from a place that I can actually stay in for a while.

Jonan: Really nice to feel settled. I have been in a similar situation. The nomad piece I was referring to -- you and I have both been in DevRel for some time, and there are some people in DevRel who don't live anywhere. They just say, "I fly around; I go wherever I go." But now we don't anymore. We don't fly, and many of us are trying to figure out ways to do this whole developer engagement thing remotely. How's that going for you?

Liz: Well, podcasts are definitely a great place. But also one of the things that I've found really cool is I used to talk to mostly the same people at conferences all over the world, especially people who could afford to buy a conference ticket and get two days off of work. And now I have open-door office hours that anyone in the world can just walk up to and book, and they will get 30 minutes on my calendar. And I have an hour every day dedicated to that. It doesn't always get booked up, but it's really remarkable to see the range of people who come in and ask what they think are quote, unquote, 'stupid questions.' It turns out there's no such thing as a stupid question. And that's really, really informative for them in terms of what they hear from me, and also for me, to get the temperature of people who are not necessarily following the observability landscape closely or going to every developer experience conference.

Jonan: Yes, this is a wonderful idea. I had been planning to do something once a week for an hour, which was -- I get a lot of requests for coffee chats because I came through a code school. A lot of code school students want someone to give them some advice. They're like, "Hey, I keep mailing my resume to every open position I see, and no one ever replies." And I'm like, "Well, do you have any friends there? Did you talk to anyone? You can't just pepper the world in resumes." And so I try and give them this advice, but it gets repetitive after a while. It's the same advice. I should just be giving it to everyone en masse, and it occurs to me that I could host a one-hour-a-week stream where people could just drop in and have a Q&A thing, but maybe I'll do it every day.

Liz: Yeah, definitely. You have to figure out what's right for one-to-one and what's right for one to many. And I think for me, that threshold is when I see the same question pop up over and over and over again, then I wind up writing a blog about it. But right now, I think a lot of the questions that I see about observability are things where there is no one universal recipe right now. And I think that the challenge for us as observability practitioners and advocates over the next year or two is how do we systematize this so that you can actually give people a flow chart they can follow of what do I do next, then what do I do next, then what do I do next?

Jonan: They need to be able to have that decision tree. So what are some of these questions? Can you give me an example of a question that you get commonly?

Liz: Yeah. One of the most common questions that I get is 'I am excited about observability. I read all of your tweets or Charity's tweets, but I don't know how to get this started in my company.' And there, the answer is it depends heavily. Are you talking about a back-end or a front-end service? Are you working on a product that is a monolith, or are you a monolith with a kind of microservice golden thread wrapped around it? Or are you a pure microservices environment? And then it just goes and goes and goes from there. Like, what are your preferred languages? How important is trace header compatibility? All of these things go into trying to figure out what the optimal path is for someone, and that can be really challenging. And I understand why it's overwhelming to people, especially because our ideas of what the best thing to do is are not constant. You write it down, and it becomes instantly out of date.

Jonan: And are you seeing people come in with nothing implemented so far? Very often, people have something. They've got something in place doing some monitoring.

Liz: Yeah, often people have APM. And the question is, how do you take your Java monolith APM or your Ruby APM and evolve it into a microservices world? That is a very, very common question. And a lot of it depends not even on technical choices but organizational dynamics. Who's doing the microservice conversion? Why? What's the business motive? And that determines how you approach the instrumentation problem, how you approach the operability, and who you train and teach to use the new observability system first. Those are all the considerations that -- I spend easily 10 minutes at the beginning of every 30-minute call just collecting this information, trying to sift through it, figure out which buckets it sorts into, and then figuring out, okay, what if we did X? And often the answer is "Yes, I've tried that already." And I'm like, "Right. Okay, great. How did it go?"

Jonan: And then you let them guide the conversation from there. So you start by understanding what they have: their system, what they're trying to achieve, their goals, and then you get -- But I think that we all have in common this goal of observability. And we also all seem to have our own definition of what that is, to some degree. I wonder if you have thoughts on that. What is your definition of observability?

Liz: So I think that observability is a mechanism for improving the operability of systems. I don't think observability should be a goal in and of itself, right?

Jonan: Yeah.

Liz: So with that in mind, observability is the capability of: can we understand what's happening inside of our systems without needing to retune and permute them too much? Do I need to push new code to production in order to understand it? Do I need to add instrumentation to answer a new question I have? Oh, that's going to take two weeks to get the next code pushed; now I'm stuck waiting on something I can't speed up. So I think that that's kind of the capability that we're talking about, but the business outcome that we're driving towards is, are developers more productive? Are customers happier because they're having a better quality of service? Those are the indicators that we're trying to drive. And how we do it can be through observability, but it also turns out you can achieve some of that through continuous integration and delivery. You can achieve some of that through chaos engineering or continuous verification. You shouldn't really talk about observability in a vacuum. Instead, it's kind of one technique for analyzing production data that goes hand in hand with other production techniques and other team philosophies and development methodologies.
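
For readers who want a concrete picture of "answering new questions without pushing new code," here is a minimal sketch in Python using the OpenTelemetry tracing API. It is illustrative only, not from the episode: the request shape, the handler, and the app.* attribute names are hypothetical. The idea is to attach wide, high-cardinality context to each unit of work up front so that later questions can often be answered by slicing existing telemetry instead of shipping new instrumentation.

```python
from dataclasses import dataclass, field
from opentelemetry import trace  # pip install opentelemetry-api

tracer = trace.get_tracer(__name__)

@dataclass
class Request:
    """Hypothetical request shape, purely for illustration."""
    customer_id: str
    build_id: str
    flags: list = field(default_factory=list)

def handle_request(request: Request) -> str:
    with tracer.start_as_current_span("handle_request") as span:
        # Attach wide, high-cardinality context up front so new questions
        # ("is it one customer? one build? one flag combination?") can be
        # answered later from existing telemetry, without a new deploy.
        span.set_attribute("app.customer_id", request.customer_id)
        span.set_attribute("app.build_id", request.build_id)
        span.set_attribute("app.feature_flags", ",".join(request.flags))
        body = "ok"  # stand-in for the real work
        span.set_attribute("app.response_bytes", len(body))
        return body
```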

Jonan: That is a good definition of observability. I approve.

Liz: It's long-winded. It takes four paragraphs.

Jonan: Yeah, it's a lot to say.

Liz: So we need to understand which endpoints are actually essential and which ones aren't. What things actually reflect good or bad user experience? And make sure that we are measuring them and understanding what our error budgets are; that's the first piece, which the Google SRE book, among many other books, started popularizing. But I think that the second piece, which is where observability slots in, is once you have one of those SLOs where you're alerting based off of where users are in pain, how do you trace that down to figure out where in the system it's broken and how do you restore service as quickly as possible? And I think that's where observability comes into play. And yes, there are things that potentially a machine learning system can help with, but I don't put much faith in any automated root cause detection. Yes, that appears in people's marketing literature, grumble, grumble... I think that when you have any kind of automated detection, it is not really doing a good job because it's either pointing out to you things that you should have automated two years ago, or it is generating more noise than it is solving, because, from the perspective of any Ops tool, any change is considered anomalous. But it turns out that if you're trying to deploy code a dozen times per day, each one of those changes is, in fact, anomalous in and of itself. So I think that, to me, the better way to approach things is to empower people to do more, to empower people to better get a grasp on what's happening inside of the system, than it is to deprive them of autonomy by taking those layers away and abstracting them from view. So I think that's the difference. Do you want Skynet waking you up at 2:00 a.m. because Skynet thought that something was amiss in the system, or would you rather have a mecha suit and go stomping around in a mecha suit that will help you leap over tall buildings until you get to the right place? And I think that those are two very different attitudes towards what role developers and operators should have.
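
To make the "alert on user pain" piece concrete, here is a rough Python sketch of an error-budget burn-rate check against an SLO. It is an illustration, not Honeycomb's or Google's implementation; the 99.9% target and the 14.4x fast-burn threshold are example numbers drawn from the multi-window alerting approach popularized by the SRE books.

```python
def error_budget_burn_rate(good_events: int, total_events: int,
                           slo_target: float = 0.999) -> float:
    """How fast we're consuming error budget in the measured window.
    1.0 means burning exactly at the rate the SLO allows."""
    if total_events == 0:
        return 0.0
    observed_error_rate = 1.0 - (good_events / total_events)
    allowed_error_rate = 1.0 - slo_target
    return observed_error_rate / allowed_error_rate

def should_page(good_events: int, total_events: int,
                burn_threshold: float = 14.4) -> bool:
    # A 14.4x burn rate sustained over an hour exhausts a 30-day budget in
    # roughly two days, a commonly cited fast-burn paging threshold.
    return error_budget_burn_rate(good_events, total_events) >= burn_threshold

# Example: 1,000,000 requests in the last hour, 985,000 of them good.
print(should_page(985_000, 1_000_000))  # True: users are in pain; page someone.
```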

Jonan: So the idea that -- I'm not talking necessarily about taking away functionality from people. I'm not saying, well, we need to swipe all of the alerts off of your plate, or we're not going to give you as much information, and we're going to just limit what you have access to. But I think that in the future, it will become necessary for us to trim the pieces that are surfaced. And in that vein then, do you think that there are changes coming for us as an industry generally and not so much -- I agree with all of the things that you were saying, to be clear, but I'm trying to guess at what the next steps for observability might be. What trends do you expect to see across all of the companies that work in this space?

Liz: I think that trend is, how do we move closer to having an almost lab notebook? How do we move closer towards people being able to document what they're doing and why? I think that is the workflow piece. We need to stop thinking about observability tools as "I used this to debug problem Y," and instead start thinking a little bit more like scientists, people who write down what they are doing and why they are doing it, and refer back to those notes to establish hypotheses. To treat things much more as a scientific discipline, I think that is the direction that flows out of this. When it comes to decluttering, you need help decluttering the more signals that you have. But I think that when you get down to those last three or four signals, which all could potentially be valid explanations, I think that's where it's especially important for us to write down: which one was it actually? Why? To provide that feedback and signal both for training of systems but also for training of our colleagues, so that you don't forget it yourselves. You don't forget it. So, yeah, that's why I have done a lot of work with companies like Blameless, these new companies that are focused on incident response workflow automation and incident analysis. How do we help people capture what they're learning rather than just losing the lessons and making people live through Groundhog Day again and again, and again, and again?

Jonan: Yeah. So then, rather than just repeating the same thing that we've done, to take notes and be able to modify our processes and our teams. And this, I think, is one of my frustrations in software generally: the post-mortem, or things that are called post-mortems.

Liz: [Chuckles]

Jonan: That is just like, well, it's dead now. Let's all talk about why it's dead as opposed to a retrospective, which gives us an opportunity to look back and learn. And it's not about blame or finding the person who did the thing wrong because the bottom line is that person had the ability to do the thing wrong because some other piece of our process was broken. Is it necessary that I, as a developer, could SSH into production, get into the production database, and drop that table? No. That shouldn't be a thing that I needed to do. So let's solve that. Let's go back and look at our process and find the pieces that led us there. So this notebook thing makes sense.

Liz: Yeah. And also, relevant to observability, how do we surface the areas where it's easy to get lost in red herrings? How do we refine the processes by which we discover and remediate these outages? I think it is super, super interesting, and not just the straight-line path of this graph, smoking gun, this graph, done. That hides a lot of the discovery and exploration that goes into how we look at incidents.

Jonan: Interesting. So you've mentioned getting lost in red herrings. Maybe elaborate on that a little bit. Let's assume that our listeners are less familiar directly with how those incidents would be in the moment -- What does that look like?

Liz: So I think that if you have an alarm that goes off saying that something is wrong in production, there are two paths that this can follow. Number one, the good path: you have service-level objectives, and this is actually a true alarm. I'm going to focus on that in a second, but I feel I'd be doing you all a disservice by pretending that's how it is and everyone is on that good, happy path. There's also the sad path, which is you get paged three or four times every single night for things that don't actually matter. So I think that, unfortunately, there are a lot of people in that world. So first of all, in that case, you have to validate: is this a real alarm or not? And unfortunately, a lot of times, you find out that it's not a true alarm, but then you don't actually shut it off. It pages the next night or the next night after that, or you're afraid to shut it off. So I think that's one way in which we need to examine every time a person is paged. Why did this happen? How do we prevent it from happening again? But let's go back to the happier path, which is this is a real incident, and we were woken up by it correctly. So then we have to figure out why did this happen? And more importantly, what can I do to remediate as quickly as possible, even before I know what's happening? Can I quickly validate, for instance, and say, oh, it's a single availability zone outage, or it's a single region outage? Can I fail over to a different region? Can I find the one customer that is sending us gobs of traffic and black hole that traffic to protect the rest of the users? Those quick remediations are the thing that we need to do first. But going to those cognitive patterns, going to the red herrings, a lot of people jump all too quickly towards I need to figure out what happened; I need to find the smoking gun, and it turns out that that's a trap. Our job is to remediate as quickly as possible, not to find the root cause as quickly as possible.

Jonan: Yes.

Liz: So I think that's cognitive trap number one. Cognitive trap number two is you start digging into what you think is going wrong, and often, if you are not well-advised by your software, you tend to rely on cognitive habits, things like: the last time that I saw MySQL CPU go up, it was caused by a developer adding a new time-consuming query. So you'll spend a long time chasing your past biases, not recognizing the next incident might not look anything like the current incident. So I think that that's another rabbit hole you can go down. And you have to have that escape hatch. If you spend more than 5 or 10 minutes digging into something, back up a step and say, wait a second, does this make any sense?

Jonan: I like this escape hatches concept.

Liz: So there's this really, really interesting retrospective that we wrote for an incident that I worked maybe a year and a half ago in which I had falsely come to the conclusion that the problem causing us to serve HTTP 500s to users was that the Amazon load balancers were malfunctioning and the health checking algorithm was wrong. And I kept chasing things that validated that hypothesis rather than looking for things that disproved that hypothesis. And it turns out that what was happening was every single back end was out of memory at the same time. And that's why the Amazon load balancer was not able to connect to our back ends. Hindsight is very 20/20. I say that now, and of course, it's obvious, but it wasn't at the time. That discovery process is what I think about when I think about how does observability help you? How do you enable people to discard red herrings as quickly as they can and to progressively iterate and make progress on narrowing things down until they figure out how do I remediate? Or how can I figure out what's going on so that I can roll back, revert, deploy a fix, whatever needs to happen?

Jonan: There's a lot here. Okay. So there are so many things I want to talk about that you just said. This is fascinating. The getting to green part that I am talking about -- I've made a note here, getting to green, and talking about that in the moment, what matters is not actually what went wrong. In the moment, what matters is that your users are in pain and if you can remediate that by some other means, if this feature is breaking the entire site, well, turn the feature off. And then the rest of the site can load, and they can't change their avatar photo. They'll be fine, but they'll at least have a website that loads in the meantime while you find a deeper issue, and this is a simplified example but --

Liz: Yeah, feature flags are amazing; the ability to quarantine individual users is amazing. The ability to failover between data centers or availability zones, like super, super powerful stuff and a lot of people neglect that and overinvest in investigative capability or alerting capability rather than remediation capability.
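
As a concrete (and deliberately simplified) illustration of those remediation levers, here is a Python sketch of a feature kill switch and a per-customer quarantine list consulted at request time. The names are made up, and a real system would back these with a dynamic flag service or config store so a flip takes effect without a deploy; an in-memory dict stands in here.

```python
KILL_SWITCHES = {"avatar_upload": False}   # flip to True during an incident
QUARANTINED_CUSTOMERS = {"customer-8472"}  # black-hole this tenant's traffic

def feature_enabled(name: str) -> bool:
    # A feature is on unless its kill switch has been thrown.
    return not KILL_SWITCHES.get(name, False)

def handle_request(customer_id: str, feature: str) -> str:
    if customer_id in QUARANTINED_CUSTOMERS:
        return "429 Too Many Requests"  # shed this tenant, protect everyone else
    if not feature_enabled(feature):
        return "200 OK (feature temporarily disabled)"
    return "200 OK"

# Example: during an incident, disable avatar uploads without touching the rest
# of the site, then re-enable once the deeper issue is fixed.
KILL_SWITCHES["avatar_upload"] = True
print(handle_request("customer-1", "avatar_upload"))
```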

Jonan: And then there was the piece about blackholing that one user's traffic. You've solved it by dumping this one user out of the system with a quarantine, but maybe you're wrong. You go down this rabbit hole. You're seeking this thing that turns out not to be the case, but you want to timebox it effectively. You're saying give it five minutes. What can you build into your systems to have those checks? Say you're with a response team; do you have a robot in your chat that reminds you, hey, you've been trying to do this thing for five minutes, time to stop unless you have a pretty good indication that you're getting there?

Liz: We don't currently, and I think that that's kind of where ChatOps-related things can really, really help. But if you don't have that, make sure that you have a different person who is the incident commander as opposed to the operations lead. That way, if the operations lead goes down a rabbit hole, the incident commander can pull them back and say, "Hey, it's been 20 minutes since I last checked in on you. Have you made any progress? Can you explain to me what your theory is again?" That rubber duck forcing you to explain your theory is super, super important. Whenever you do an intervention, for instance, blackholing a user's traffic, you should expect the system to recover in 5 to 10 minutes. If it does not, then you should turn that user back on.
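
A toy sketch of that "expect recovery within a window, otherwise revert" discipline might look like the following in Python. It is not a real ChatOps integration, and the quarantine, unquarantine, and error_rate helpers in the usage comment are hypothetical.

```python
import threading

def apply_intervention_with_deadline(apply, revert, is_healthy,
                                     deadline_seconds=600, notify=print):
    """Apply a mitigation, then check back after `deadline_seconds` (10 minutes
    by default) and suggest reverting if the system still isn't healthy."""
    apply()

    def check_back():
        if is_healthy():
            notify("Mitigation appears to have worked; keep it in place.")
        else:
            notify("Still unhealthy after the expected window; reverting.")
            revert()

    threading.Timer(deadline_seconds, check_back).start()

# Hypothetical usage: black-hole one customer's traffic and expect recovery
# within 10 minutes, otherwise automatically undo the quarantine.
# apply_intervention_with_deadline(
#     apply=lambda: quarantine("customer-8472"),
#     revert=lambda: unquarantine("customer-8472"),
#     is_healthy=lambda: error_rate() < 0.001,
# )
```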

Jonan: And so the incident commander would take the responsibility for checking in. And this is why, as you pointed out, it's so important to have a distinction there. You have an incident commander who's in charge of walking the circle and making sure that everyone is aware of the plan, and taking the correct course of action. Then you have an operations lead who is actually going to go and dig in and find the issue in this case. What other roles do you see commonly played in an incident response?

Liz: One of the ones that I wish people did more often, at least for when we return to an in-person debugging world, is a scribe: having someone write down what happened, when, and when we decided to do something. Documenting our plan, that scribe role is really, really important. And some of it can be done automatically by scraping your Slack logs, but not all of it can, and part of that role is figuring out what is lost when someone hops onto a Zoom to debug in real time. So I think that's one important role. I think another important role is the communications lead, to make sure that someone is handling communicating to the customer success team or doing the communication to customers themselves, to make sure that people are kept in the loop appropriately. That way, you don't wind up bugging the incident commander or bugging the operations lead with hey, what's going on? Hey, what's going on? Hey, what's going on? You want to separate out that broadcasting role.

Jonan: And this is the town crier, someone who's just doing the external communications from the team. So the scribe is keeping track of everything that's happening. The operations lead is doing the thing, the operation. The incident commander is looking over all of it, getting that 10,000-foot view, and the communications lead handles everything external.

Liz: 10,000-foot view to be able to come up with a plan and to react if the plan is not going as planned.

Jonan: Yeah, this makes sense. We had a guest on this show named Bobby Tables who has this company called FireHydrant, and they do a ChatOpsy kind of thing that drives this and creates some of that documentation. How much do you think that that role can be automated? Is there something to be said for always having a human element there?

Liz: I think that you still have to flag what's important whether you're using FireHydrant or Blameless; it's important to star the things that are actually important because not every line of chat in that chat room log is going to be helpful. Some of the turning points, like when did we fork this thread of investigation, or when did we deploy this mitigation? You have to tag those things. It could be that it's not a dedicated role, but in that case, the incident commander has to be extra on top of marking for the bot which things were pivotal turning points.

Jonan: Interesting. And the last piece I wanted to talk about was this escape hatches thing, because I feel like it goes beyond the "that's probably not going to work out; let's move on to something else" to building processes into your system that almost force progress beyond that, beyond the incident commander coming and checking in, forcing the progression towards a resolution or a remediation. There are other processes that people sometimes employ. What other advice would you have for a team trying to make sure that things proceed in an orderly cadence, not necessarily just that people aren't chasing a red herring, but any of the other pieces that could go wrong in an entire incident response?

Liz: The number one tip that I have is don't panic. Get lots of water. Get rest if you've been sitting in front of the computer for long enough. Don't let your nerves get the best of you. I think that that human factor is super, super important. Yes, those of us who develop observability tools can make people's lives more ergonomic. But at the end of the day, it's a person doing the analysis, and that person's brain starts to develop bugs once they're sufficiently stressed or tired or dehydrated. So make sure you go for a walk around your yard, wearing a mask if you have to.

Jonan: There has most certainly been plenty of science behind this, on what the sustained attention span of an adult is when they're intensely focused on a problem like that, and where that limit is. When is the time to swap out a human and say, "You are forced now to go for a walk," because someone has done this research, for example, with air traffic controllers or people in other critical roles like that, where it's like, "No, you're taking a break now."

Liz: There is no reason to assume that software engineers are any better at this than air traffic controllers. We follow the same rules of cognition that air traffic controllers do. So I think this is what is really great about John Allspaw's research: it brings the safety engineering fields from air traffic control analysis into software engineering rather than saying, oh, software engineering is its own special discipline where we reinvent everything.

Jonan: You mentioned this John Allspaw fellow. Tell us who that is.

Liz: So John Allspaw was one of the people who helped coin and popularize the term DevOps. He gave a talk at the Velocity Conference about 15 years ago in which he hypothesized that, hey, there's this thing called web operations, and we really ought to treat it as a specialized field. He was a very senior engineer at Etsy. Instead of going on to found another startup, what he chose to do was start an academic research and consulting business to really focus on human factors research, working with people like Richard Cook, who was instrumental in safety analysis in other industries, to come together and look at software system failures as systems failures rather than as just software failures.

Jonan: That systems thinking, do you think that it was a part of what we did before that revolution? Before we started talking about this as DevOps, do you think that there were teams? I'm certain that there were. It wasn't as widespread. Maybe we didn't have a name for it. But what is your perception of life before that, before we called it DevOps?

Liz: In the old days, this was, for me, 2004. In 2004, I was a systems engineer at a massively multiplayer online game. It was one of the first massively multiplayer online games. And yes, we had our developers on call. There wasn't the name DevOps, but our developers were on call. But at the same time, if everything went south, you would call Lendin, you would call me. You would call the Ops expert. So there was still this idea that there were some people who were going to be better at Ops than other people. So I think that we weren't necessarily doing a lot of those human factors during a retrospective. We weren't necessarily doing that, but we were, in fact, avoiding assigning blame. We were, in fact, putting developers on call. So I think a lot of these are practices that slowly got formalized over the past decade and a half in terms of what it is we ought to do, and things that have become mundane, such as, is it DevOps? Is it SRE? It's probably a little bit of both.

Jonan: This is not an important point but how do you feel about DevOps as a job title as opposed to SRE?

Liz: I think that DevOps is the name of the broader cultural movement. It might be okay to call someone a DevOps coach if their job is to help teams embrace DevOps. But to call your Ops engineer the DevOps engineer, please don't do that. I think in the same vein, though, SRE is an overloaded word. SRE can refer to site reliability engineering, the discipline; it can refer to Site Reliability Engineering, the department, if you happen to practice SRE by having a dedicated management team for it; or Site Reliability Engineer can refer to an individual engineer whose responsibility and specialty is thinking about production systems regardless of who they're reporting to. So we mix up what's the methodology, what's the organization, and what is the job title.

Jonan: I would like to see less of that, but that is a war that I have historically never won. [chuckles] And anytime that I try and steer job titles even just within the company where I work towards particulars, there's a lot that goes into that.

Liz: I'm less fussed about whether you call yourself an SRE or a DevOps engineer. What I care about is that you not just rename people's business cards and not actually change what they do. If you change what you do, the job title will come naturally out of that. Trying to force things through the job title never works.

Jonan: I have participated in scrums, for example, so I'm a scrum engineer; no, this is a terrible idea. Please don't do that. You mentioned briefly there, if you happen to have a dedicated management hierarchy for SRE, good idea, bad idea? Should SRE be a whole team, a whole org, or should it be built into the engineering teams?

Liz: This goes back to the beginning of our conversation, when I talked about 'it depends' as the answer any consultant gives. If someone asks me about that, I have to listen for a good 5 or 10 minutes to figure out whether a dedicated SRE team is the right thing for their organization. If you need to push back on a product management or engineering org that is not necessarily focusing as much on reliability and is focusing more on feature development, yes, having a set of human beings whose job it is to stand up for reliability and who have the mandate to do that, great. But a lot of the time you don't need that. All you need is a community of practice for your SREs to make sure that, regardless of what their job title is and where in the company they're located, there is someplace for people to share ideas about how to operate better systems. That community of practice is the crucial thing that you need regardless of whether or not you have a dedicated job ladder, organization, management chain, and so forth.

Jonan: So if you had other advice beyond what we have already covered, and there are a lot of good pieces of advice in here for people to pick out, but maybe just kind of an overarching goal for people or some general steering for someone who's trying to understand this whole space of observability, because you're right in that it is very deep, and complex, and nuanced. There's a lot there. The 'it depends' answer is, of course, relevant in this case as well. But if you could only tell someone one thing, and 'it depends' is not an option because you don't know their organization, what would you tell them?

Liz: Look at your code and foster a sense of curiosity about it. Everything else follows from that. If you have a sense of curiosity about your code, if you're looking at how it behaves in production, inevitably, that will drag you in the direction of asking questions that you don't have the capability to answer and then looking for ways to answer those questions. And then eventually, you will get towards a state of better observability that way. But if you don't even start looking, if you're like la-la-la, my eyes are shut, I can't see the problems in my systems, then you're not going to achieve observability.

Jonan: I like it. Very smart advice. Thank you so much for coming on the show, Liz. Where can people find you on the internet?

Liz: People can find me at www.lizthegrey.com or twitter.com/lizthegrey, that is with an ‘E’ between the ‘R’ and ‘Y.’

Jonan: Is that the British or the American spelling?

Liz: It is the American spelling.

Jonan: With an ‘E’.

Liz: Yes.

Jonan: And the ‘A’ would be British.

Liz: Yes.

Jonan: See, this is the one word where I get mixed up repeatedly because normally you can just insert some extra vowels and make it the British spelling, but in this case, we just swap one. The American grey: lizthegrey in most places on the internet. Presumably, your GitHub is lizthegrey as well.

Liz: That is correct as well.

Jonan: Excellent. Well, again, thank you so much. It was a pleasure.

Liz: It was a pleasure. Thank you very much for having me.

Jonan: Thank you so much for joining us. We really appreciate it. You can find the show notes for this episode along with all of the rest of The Relicans podcasts on therelicans.com. In fact, most anything The Relicans get up to online will be on that site. Right now, we're running a hackathon in partnership with dev.to called Hack the Planet, where we're giving away $20,000 in cash prizes along with many other fabulous gifts simply for participating. We would love to have you join us. You'll also find news there shortly of FutureStack, our upcoming conference here at New Relic. The call for papers for FutureStack is still open until February 19th. I encourage you to stop by and submit a proposal. We would love to have you join us. We'll see you next week.
