

Nuances, Opinions, and Phenomenons with Postmodern

Mandy Moore ・ 28 min read

A beacon in the Ruby community, Postmodern talks to Relicans host Kirk Haines about being part of that community since 2008 and the myriad tools, libraries, and projects he’s created over the years.

Should you find a burning need to share your thoughts or rants about the show, please spray them at devrel@newrelic.com. While you’re going to all the trouble of shipping us some bytes, please consider taking a moment to let us know what you’d like to hear on the show in the future. Despite the all-caps flaming you will receive in response, please know that we are sincerely interested in your feedback; we aim to appease. Follow us on the Twitters: @PolyglotShow.


Jonan Scheffler: Hello and welcome to Polyglot, proudly brought to you by New Relic's developer relations team, The Relicans. Polyglot is about software design. It's about looking beyond languages to the patterns and methods that we as developers use to do our best work. You can join us every week to hear from developers who have stories to share about what has worked for them and may have some opinions about how best to write quality software. We may not always agree, but we are certainly going to have fun, and we will always do our best to level up together. You can find the show notes for this episode and all of The Relicans podcasts on developer.newrelic.com/podcasts. Thank you so much for joining us. Enjoy the show.

Kirk Haines: Hi, and welcome to the Polyglot podcast. I’m Kirk Haines, your host. You can find me on Twitter, GitHub, pretty much any other social media at wyhaines. And today, I'd like to welcome my guest, Postmodern. Postmodern, would you mind giving us an introduction to yourself?

Postmodern: Hello. I am Postmodern. You probably recognize me from GitHub and from Twitter. I’ve been in the Ruby community for a while, I think since 2008. And during that time, I have produced a couple of useful tools and libraries like Spidr, bundler-audit, chruby, and ruby-install. And I also have many other projects that you probably haven't heard of.

Kirk: It’s since 2008. I was trying to think back, trying to guess when it was. I actually had in my notes “2010?” Because I couldn't quite remember. I knew it'd been quite a while. Let’s start with, I guess, some history. So we know each other primarily from the Ruby community. But what brought you to Ruby originally? What was your starting point in your voyage, and how did you end up getting into the Ruby community?

Postmodern: Well, I was working on some project of mine. It was actually in C, of all things. But it was this weird project that I was working on, on the side during college. And I wanted to make this weird template language for building strings or whatever. And it was quickly becoming like, well, this would eventually become a scripting language. And one of my friends in university was like, “Well, you might as well check out this new fancy language called Ruby.” And I'm like, “Okay, I'll look at the documentation.” And I noticed that a lot of the examples used blocks, and I thought that was really cool that you can have this transactional style of programming where it handles the setup and cleanup for you. And you can also catch exceptions and stuff like that, or use ensure to handle any exception but then pass it back up the stack. That was really refreshing because if you ever do any hardcore C programming, you will have to do lots of these, I don’t know what you call them, setups where you have to do all this initialization code. And then, at each step, you have to check if an error occurred. And if an error occurred, you have to go to the bottom where you have a fall-through of go-tos that then sequentially, kind of backwardly, deallocates and closes everything out. So it can nicely and gracefully clean up after itself if an error occurs, and all files get properly closed and stuff like that.
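To make that contrast concrete, here's a minimal Ruby sketch of the transactional block style Postmodern is describing; the file names and method are made up for illustration. File.open handles the setup and cleanup, and ensure still runs even when an exception gets passed back up the stack.

```ruby
# Hypothetical helper, just to illustrate the block/ensure style.
def copy_first_line(src, dest)
  File.open(src) do |input|            # the file is closed automatically
    File.open(dest, "w") do |output|   # ...even if an exception is raised
      output.puts(input.gets)
    end
  end
rescue Errno::ENOENT => error
  warn "missing file: #{error.message}"  # handle the error here...
  raise                                  # ...then pass it back up the stack
ensure
  puts "cleanup runs no matter what"     # no goto chains required
end
```

Compare that with the C pattern he describes, where every step checks for an error and jumps into a chain of labels at the bottom of the function to unwind whatever was already allocated.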

So that was really cool. I was blown away by that. And I eventually slowly got more into it as time went on. And then, I started using it for various projects, side projects, and things. And this was around the time when RubyForge was still around, and we were using Hpricot to parse HTML. And so that got me into the Ruby basics of writing tools for scraping HTML and stuff like that. And slowly, eventually, I got more into it as my needs advanced to where Hpricot wasn't good enough. And then eventually, I think Nokogiri came around, and I started building gems for various things or using more obscure gems to meet my needs or whatever. And so I eventually, incrementally got more into it. And I think I also did another project for university, which was my first real sit down and plan something out and build it in Ruby, and that was Raingrams, which was an n-gram processing library. Looking back on it, it's not the greatest. I haven't touched it in forever. But it was a good example of actually taking an algorithm from a paper that explains how it works, and then constructing all of the classes and all the logic and modeling that in Ruby to get the results, and then writing tests, of course. And it was really cool to see it actually work finally.

Kirk: That's cool. I went to your GitHub page while you were talking there, and it looks like you updated GitHub Actions on Raingrams or something a week and a half ago. So it's high on your list of repositories right now, which is interesting. Other than Ruby, I know you've been doing some Crystal because I've bumped into you in the Crystal community. And according to GitHub statistics, you do a little bit of C. Is that your primary ecosystem that you work in right now?

Postmodern: So I think that might be a little misleading about the C. I’m pretty sure what's throwing that off is Digest-CRC. So Digest-CRC was a library I wrote because at the time, I was looking around, and there was no complete CRC algorithm suite for Ruby. And that's one of the moments I have where I'm like, come on. I know that because of Rails, Ruby is pretty web-heavy. But there are certain things you should expect in a language, like CRC checksum algorithms. And apparently, there are tons of them, tons of variations, not just CRC-32; it's just one of many. So I went about building that, based off of PyCRC, which is nice enough to include generator code, and I started out writing it in pure Ruby. Because I’m like, oh, just writing it in pure Ruby will let the interpreter and the VM do all of their optimizations that way. If people want a faster version, they can switch to JRuby or whatever is the latest, fanciest VM alternative to the MRI implementation.

But then eventually, that library took off, and apparently, Zendesk's Kafka clients used it. And then I got a bug report one day explaining that when they were testing, just sending something silly like a zero-length string or a single-character string, just hammering their queue with this, they found that most of the time was spent calculating CRC-32 checksums, again, weird variants of CRC-32. It was basically just doing bit math in Ruby. And so I had to optimize that, and eventually, I took it upon myself to write C extensions for all of the implementations, which usually involved going back and generating the C code from PyCRC and then cleaning it up, because I used to write lots of C code. And so there's a very defensive and very specific way that you write C code, and that's just the way you write it. It's because C is a very finicky and fragile language. It has a type system, but it's memory-mapped. And there's also implicit type conversion or typecasting, which can bite you really bad, especially when unsigned integers accidentally get converted to signed integers and vice versa.
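For a sense of why that was slow, here is a rough pure-Ruby sketch of bitwise CRC-32 using the standard reflected polynomial; it's not Digest-CRC's actual code, just an illustration of the per-byte, per-bit math that a C extension can collapse into a tight native loop.

```ruby
# Table-free, bit-at-a-time CRC-32 -- the kind of hot loop that is painful
# in pure Ruby and trivial for a C extension. Not Digest-CRC's real code.
def crc32(data)
  crc = 0xFFFFFFFF
  data.each_byte do |byte|
    crc ^= byte
    8.times do
      crc = (crc & 1).zero? ? (crc >> 1) : ((crc >> 1) ^ 0xEDB88320)
    end
  end
  crc ^ 0xFFFFFFFF
end

require 'zlib'
crc32("hello world") == Zlib.crc32("hello world")  #=> true
```

Hammering a queue with tiny strings means running that inner loop constantly, which is why the hot methods were worth pushing down into C.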

Kirk: [laughs] Yeah.

Postmodern: And in other languages like Go, Rust, Crystal, and Nim, there are no implicit conversions like that. In C, it just happens magically in the background, which is why you've got to pay attention to those compiler errors or compiler warnings. The C compiler isn't going to treat those things as errors, just, oh, by the way, you might want to look into that. And so finally, I got it to where I had comparable C implementations for everything. Someone wanted to just write a C extension for that one algorithm in order to scratch their itch and fix the problem that the Kafka library was having. But I wanted a nice consistency across the board. And so that was basically lots of copy and pasting and lots of editing.

And then I had to do something actually pretty clever because I wanted the library to gracefully fall back if it couldn't build the C extensions when installing. For instance, you might not have a compiler installed on your system; maybe it's a new MacBook Pro or whatever, and you haven't set up Xcode or whatever. You go to install C extensions, and it’s like, hey, no compiler, whoops. That's a bump in the road, I guess. So I wanted to handle that, where if the C extensions couldn't be built for whatever reason, it would fall back to the pure Ruby implementation that could always be built. It would load and then override certain Ruby methods that were the hot code that did all the bit math. But anyway, that's where a lot of the C comes from.
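The general shape of that fallback pattern looks something like the sketch below; the file and module names here are hypothetical, not Digest-CRC's actual layout.

```ruby
# lib/mygem/checksum.rb (hypothetical names, just the pattern)
require 'mygem/checksum_pure'    # pure-Ruby implementation, always loadable

begin
  # The compiled extension, if it built at install time, reopens the class
  # and redefines the hot bit-math methods with native versions.
  require 'mygem/checksum_ext'
rescue LoadError
  # No compiler at install time, or the build failed: quietly keep the
  # pure-Ruby methods instead of refusing to install at all.
end
```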

I wrote a couple of other small libraries in the past, but mainly in Bash and POSIX shell script. That's one of the other languages that I keep falling back to because, of course, ruby-install is written in Bash proper. And then chruby has to be in POSIX shell because it loads into your shell, which could be Bash or ZSH. And so, I have to target the least common denominator between the two. And contrary to popular belief, ZSH did not evolve from Bash; Bash, of course, is the Bourne Again Shell, which evolved from the Bourne shell. And ZSH actually evolved from KSH, the KornShell, not to be confused with Korn, the band. It's interesting. They have similarities, but they're basically different lineages. There are bizarre differences between them. For instance, ZSH starts arrays at index one, which is kind of weird; everyone else uses zero. And also, Bash is really nice; I mean, the language isn't nice, but it's nice that it's installed by default on most systems. And so you gravitate towards it, and you don't have to compile anything. You don't have to ship binaries. It's just there. And so it lends itself very well to writing systems utilities.

I also wrote this weird other utility called ZCOW that involved me wanting to combine device-mapper, which on Linux is this abstraction layer and tool for handling devices and creating mirror devices. And so you can create things like, essentially, software RAID, or you can create copy-on-write block devices. So you can send all the writes to one device. But the reads, if the block hasn't been populated on the write device, will fall through to the underlying device. And so that way, you can create these kinds of devices. It's difficult to explain if you've never messed with copy-on-write stuff. But people know it if they've ever booted up an Ubuntu live CD: all of the changes happen in memory, but what's running is based off of a compressed static image, and that's done with copy-on-write. And so I built that with just Bash because it's basically just a bunch of commands, calling device-mapper and then zram, and doing some math to convert between different block sizes.

Kirk: I've written a fair amount of Bash in my life. It's always fun sometimes just to drop into Bash and do something in Bash, just because. But I've never -- Well, I use ZSH as my interactive shell, mostly because there are some really nice add-on tools for just making the shell environment nice and user-friendly, and inviting, and pretty. But I've never written any shell scripting that targets ZSH itself; it's always targeted Bash. So I did not know about that array indexing thing.

Postmodern: See, that's the funny thing is everyone loves ZSH for tab completions and stuff like that and like Oh My ZSH. But no one actually scripts in it, which is hilarious.

Kirk: [laughs]

Postmodern: And there are lots of weird things in it that once you start looking at it, you're like, oh, hey, that's kind of weird. Why did they do it that way? But getting back to why Bash, like C, is such a weird language: it’s convenient because it's right there and has this built-in set of features that are enough to get most problems solved, but it's not an object-oriented programming language. They have shoehorned so many features into existing syntax that every feature ends up having all of these caveats, and you don't really realize these caveats until you actually start looking into it. A good example is when people discover you can do set -e in Bash scripts, so when an error happens, when a command errors, it causes the script to crash. However, that doesn't technically work if the command is in a pipe, because errors are handled differently in pipes versus single commands that are in the script individually. And then also, it doesn't quite work in subshells or in command substitutions where you're capturing the output of a command and treating it as a string. And there are all of these weird things.

And so if you ever actually want to write actual, real, I don't know what the buzzword is, but some reliable, stable, rugged Bash that you can definitely count on and that doesn't have weird, subtle bugs and side effects, there's this website called Greg's Wiki, or the Wooledge wiki. And that has this Bash guide that just completely, exhaustively explains everything you ever want to know about Bash. And it also has a dedicated FAQ page called Bash Pitfalls. And they're incrementally numbered up to 75 or something like that, covering every edge case in the language. It’s a super indispensable resource if you ever have to mess with anything Bash.

Kirk: That's really interesting. I guess the gnarliest thing that I've ever done with Bash myself was a couple of years ago. I was applying for a job where I was given a programming challenge to use any language I wanted to do this task that involved interacting with Twilio’s REST APIs in order to basically get a phone number, send a message to the phone number to validate that it's a real number, get the response back, et cetera. And the job itself was primarily a Ruby job. And so I thought, okay, I can do this in Ruby, but this would be really easy in Ruby. I'm going to do it in Bash, and I'm going to do it in Bash just because it won't be as easy. And it was a really entertaining voyage. I put together a command-line tool with command-line argument parsing, and there's a full test suite for it and everything. It's one of these things where I spent way more time on that coding challenge than I should have. I would have been done with it in Ruby very quickly. And it's totally useless; I ended up getting a job somewhere else, so there was no point in ever doing it. But it was just so much fun because I wrote it all in Bash, and it worked great. It was a little weird, but it worked great. And I guess the thing that surprised me most when I went down that path is that there are things like test suites, unit testing frameworks for Bash, and libraries to build command-line option parsing for Bash, where you'd think if you're going to write stuff like that, you'd use some language other than Bash. But lo and behold, all those utilities are right there in Bash. I thought it was fascinating.

Postmodern: So this is also a phenomenon that I've found interesting, where a lot of times people will look at something, and they'll say, “I don't need to write tests for that. It's so easy.” And, of course, nothing is ever that easy.

Kirk: [laughs]

Postmodern: There is always nuance. But then the second there's some testing framework or some way of instrumenting tests, then suddenly, people's opinion changes and they're like, “Ah, yeah, testing framework, testing is great. Love testing all the time.” And two examples of that are, one, Linux kernel programming. For a long time, people were adamant that you can't unit test this, that you have to actually do a full integration smoke test with the thing booted up; there's no possible way we could do any unit testing. And then some researchers at a university somewhere actually basically created unit testing for the kernel. And then Bash, that's the one where people were saying, “Oh, why do I need to write tests for Bash? Bash is so simple. It's just a bunch of commands. What could possibly go wrong?” And it turns out a lot. Once testing frameworks started popping up -- and I believe the oldest is shUnit, S-H unit, or shUnit2; that's what you'll find it as on GitHub -- then once people discovered those exist through the power of Google, all of a sudden, everyone's on board with testing. I just found that so interesting that people can be so adamant, and then once exposed to the information, they're like, “Oh,” 180-degree change of opinion.

Kirk: Oh yeah. So the testing framework that I use is Bats because when I was Googling around, that happened to be what I found. But the subject of tests is an interesting one because there are a lot of different opinions about how tests should be done. And a lot of times, there's this reality where, when you step back away from the software and you're not actually writing the software, you have an attitude about how tests should be done. And sometimes, that attitude doesn't survive contact with the code. And so I'm just curious, when you're coming into a new project and you're starting to write code, how do you approach tests? Are you someone who does test-driven development, where you implement your tests, then you implement the code to satisfy the contracts that are asserted in the tests? Or do you write the code, and then you come back and write tests to basically prove that the code works the way that you think that it works? Or do you do something different?

Postmodern: Oh, well, I guess prepare to be offended, people, because everyone has super opinionated religious ideas on TDD.

Kirk: Exactly.

Postmodern: I guess closest to the latter. What I end up doing is, one, I spend a lot of time thinking about the code and thinking about the structure of it or the design of the library and what pieces will talk to what other pieces. And then once I have enough ideas in my head of, yeah, that would work, or that would talk to this, then I go and actually start prototyping it. And also, I have to come up with a name for the library, and the name has to inspire me and stuff like that. So once I've done all that work, then I'll actually start prototyping code. Once I have enough code that I can load it in IRB, I can start calling it, and then I validate that, oh, at least it sort of works; I’m getting back the results I'm expecting. Then I switch to writing tests, where I end up going back over some of the prototyping ground that I laid out and asserting all the edge cases: when I call this, this should work, and when I do that, that should work, and when that happens, an error should be raised, and stuff like that. But then, once I have that going, I switch more into the traditional TDD style where one thing will lead to another. And it will start like, oh, well, that feature is going to need a complementary feature, so we’re going to have to create a module for that. And so the cycle repeats itself that way.

And sometimes, I’ll kind of stub out tests before I’ve even actually written any code for the new classes or modules. And that just helps me keep in mind the edge cases and how I want things to be structured or how they should work. And sometimes, even if I write the code but I don't feel like writing the tests because I'm just too exhausted, I'll just stub out the test names. I'll just write out the English description of what should do what, but I’ll leave the bodies empty, and I'll come back to them later. And then I won't even commit them. They'll just be uncommitted files hanging out there in the Git repository to remind me in the morning, gee, I should get back to working on that. So it's not the traditional lockstep TDD process that everyone swears by. But I have a sneaking suspicion that a lot of people don't do true TDD, that this is just more of a rule of thumb and shouldn't be taken literally.
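In RSpec, that habit maps neatly onto pending examples: an it with a description but no body is reported as not yet implemented, so the stubbed-out names stick around as reminders. The spec below is made up purely for illustration.

```ruby
require 'rspec'

# Hypothetical spec file, just to show the "names first, bodies later" habit.
RSpec.describe "MyParser#parse" do
  # Bodiless examples: RSpec reports these as pending (not yet implemented),
  # so the edge cases stay written down even before any assertions exist.
  it "parses a well-formed document"
  it "raises an error when the input is empty"
  it "ignores trailing whitespace"

  # Later, the bodies get filled in one at a time.
  it "treats nil input the same as an empty string" do
    expect(nil.to_s).to eq("")
  end
end
```

Run that with rspec and the bodiless examples show up as pending rather than failing, which is exactly the gentle nudge Postmodern describes leaving for himself.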

Kirk: That's why I asked that question because for myself, I do something very similar to what you do. Typically, I write some code; I run the code, I use IRB to test the code. I get things to come together so that at least along the happy path, it looks like it probably works. And then once I get to that point, then I come back, and I start putting in tests to both validate the happy path and validate all of the other edge cases. And that's usually when I discover, okay, here are all the things that I actually didn't consider that my tests are now finding. But I almost never write the tests first in part because until I'm actually writing the code, and playing with the code, and running the code, and breaking the code, I don't always exactly know what it's going to look like. So I don't always know what I actually want to test for sure until some code has evolved. And then once the software is more mature, then exactly; you get into these cycles where okay, it's more mature, and we're thinking okay, I might want to add this feature so I can go stub out some tests that at least give a framework for where I'm going. And then, we iterate back and forth between writing code and filling in tests. And I like to ask this question to people because I've not talked to very many people that really adhere to the classic test-driven development mindset. Almost everybody does some hybrid approach, but very few people do it exactly the same way, and so it's just interesting to me.

Postmodern: So not to get too philosophical here, but I think there's a difference in context with a lot of these methodologies like agile and TDD: they're usually based in a more professional enterprise environment where people are having meetings about how certain modules should be designed and creating design documents. And so that's a lot more rigid and structured. And so I guess TDD would lend itself more to that style of development because you have more of that, I guess you would call it structure, the engineering artifacts that are laying out the rubric of what you're going to actually end up implementing. Whereas greenfield projects, where you just have this random itch that you want to scratch, that's more prototype land, that's more R&D-type work where you don't really know the outcome of your idea. And basically, all of your ideas are thought experiments that you then have to go and test. You basically have to implement them to know if they're going to work out or if they're going to be sound, because you're probably not storyboarding this stuff.

Kirk: [laughs]

Postmodern: You're not writing formal proofs in some formal verification language. It's basically just you coming up with ideas and sticking them together and hoping that it all fits together and it's all sound. And then you don't really know until you actually create a working prototype first. And then that's when you know I got something here. Let’s now make it formal. And another boring topic we could then bring up is documentation. Because usually, I start writing the test either before or after I start writing documentation just as a way to go through all the edge cases of all the arguments and all of the different data that it can yield, and documenting what exceptions it can raise as a way to help me remember all the edge cases and get all of that documented fresh in my memory.

Kirk: Yeah, I am similar in that regard. Usually, for me, there's a first pass at the documentation that comes after I'm partway through the tests. But documentation for me seems to be something that is always constantly evolving. You iterate on it, and you document, and then sometime later I go back, and I look at the documentation, and I realize, oh hey, that was a dumb way to say that; that left out these details. And so then it evolves a little bit more. And documentation is hard.

Postmodern: What I found that helps with documentation is to get away from thinking about how I should write this in English. One of the really painful things with documentation, or documentation formats that don't support tags, and this is specifically about API documentation, is that when you don't have the ability to annotate things with tags, you end up falling back on using markup, like, oh, I need to italicize all the variable names. And then this leads to where you're constantly worrying about the formatting and not what you're saying. And so what I've found works better is tag-based documentation software similar to Doxygen and Javadoc; in Ruby, I use YARD a lot. I'm still waiting on Crodoc for Crystal. Someone's working on, essentially, a YARD equivalent in Crystal. Because even though Crystal is a strongly typed language, and we can get into Crystal, when you define functions, you basically have to specify types, or they're inferred for you. And so when you generate documentation, you have all of that type information already there, but you don't have the ability to then individually annotate all the different edge cases where you can say, “Oh, well, if this argument is omitted, then it means this.” And so you don't have the ability to get into the fine detail of what it means. Sure, it says that it yields a block or, you know, I don’t know what the correct English for that is. It accepts a block, and then the block will be passed these kinds of arguments. But beyond the types of those arguments, there's no real way to individually tick off each one and say, “Well, this argument actually means this thing in terms of the API.” And maybe you can imply that based on the type name, but a lot of times, when you're yielding strings and basic primitive data like that, it needs more contextual information: this variable is actually this more important thing that relates to this other data over here, and it's the index that points into this buffer, something like that. When you are doing tag-based annotation, you just get into the cycle of basically going down the list and tagging everything and filling in some blurb.
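For readers who haven't used YARD, the tag-based style looks roughly like this; the method and its behavior are invented purely for illustration.

```ruby
# Scans the buffer for records and yields each one.
# (A made-up method, just to show the tag-based YARD style: every argument,
# block parameter, return value, and exception gets its own annotation.)
#
# @param buffer [String]
#   The raw data to scan.
#
# @param offset [Integer, nil]
#   The index into +buffer+ to start at. When omitted, scanning starts at
#   the beginning of the buffer.
#
# @yield [record, index]
#   Called once for every record that is found.
#
# @yieldparam record [String]
#   The matched record.
#
# @yieldparam index [Integer]
#   The index into +buffer+ where the record begins.
#
# @return [Integer]
#   The number of records found.
#
# @raise [ArgumentError]
#   If the offset is negative or past the end of the buffer.
#
def scan_records(buffer, offset = nil)
  # ...
end
```

Running yard doc over a file like that turns each tag into its own labeled section in the generated API docs, so the per-argument nuance Postmodern describes has a natural place to live.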

The other thing I've finally come to concede about documentation is that it depends on your community: who is this tool targeting? What's the user base? A lot of developer communities are going to be very detail-oriented. So if you put some examples in a README or put them in the API documentation, chances are people are going to see that. However, if your community is more academic programmers, or they're going to be more research people, I have found that you basically have to repeat yourself as a way of increasing the chances of them noticing it. And I think this also seems to be true where you have people who are really rushed and hurried, where maybe they have lots of deadlines or whatever. And it's not that they're scatterbrained or careless. It's that they have externalities pressing on them. And so the more you repeat yourself in the documentation, if you have the same examples in the README as in the actual API docs, there is more of a chance that they're going to put something in Google, and Google is going to find it and bring them to the documentation. And they're going to see it and be like, “Oh, that's how you do that.” And so that's just one of those things where you're trying to hedge your bets, pretty much. Because the more you do that, the fewer support tickets you get where people are like, “How do you do this?” and you have to link them to some obscure area of the API documentation. Or you could just repeat it a bunch, you know?

Kirk: Yeah. That's a really good insight. I want to actually circle back to Crystal just a little bit for a few minutes. So, how did you find Crystal? How did you decide to take a look at Crystal?

Postmodern: So I always try to keep an eye on emerging technologies so my skill set doesn't atrophy. But also, at the same time, I'm very skeptical of up-and-coming and over-hyped technologies. Like, I'm still not sold on Kubernetes, and I definitely think that there's going to be another technology that comes along and basically just wipes it off the map, because there seems to be a lot of grumbling and pushback against it. And so that seems like an opportunity for someone to move into that space. With Crystal, I was in the Ruby community, and I saw a link for it pop up one day years ago. And at that time, it seemed like a toy language. It had a very nice website, but their example was basically defining a class called Point that had X and Y properties. And so it was very, very basic, and I didn't even try installing it at the time. I was like, oh, that's interesting. They're building this language that basically is Ruby syntax and compiles down to native code. And I didn't look at the internals yet, but I knew that compiler design is a really hard problem. Even if you use LLVM as a backend to handle the actual generation of the native code, you still have to do all of the parsing and the semantics and all of that stuff.

So it popped up on my radar, and I was like, oh, that's interesting. I'll keep an eye on it later. And eventually, I came back around to it and saw it pop up again, and this was also around the time when there was lots of evangelizing about Go and then Rust. And I noticed that there's this marketing battle between the factions, where people are trying to secure as much of the mindshare of modern compiled languages as they can. Not like C and C++, which are the big legacy, I guess you could call them kingdoms at this point; they're entrenched. There's also then the competition where these languages are competing for marketing in the space because they need users. They need to capture these users. And they start to compete with interpreted dynamic programming languages like Ruby and Python. So I was looking at the space, and originally, I was looking at Rust because Go had been really heavily marketed, and Rust had some interesting features, being that it was fairly functional. It had elements of Erlang in it. I thought that was pretty cool. But once you actually start to use it, you get the sense that you're contorting yourself more to the language rules than the language is making exceptions for you. Instead of the compiler being like, oh, I see what you're saying, what you're doing there in the code, let me just change that to something more correct, it will be like, no, you are doing something wrong. You need to fix your code. You are a bad user. And in their defense, they have done a lot of improvements to the compiler. But things like the borrow checker are still, in my humble opinion, somewhat user-unfriendly. But borrow checking is a new concept, and people have to wrap their heads around it.

And the other thing was Go, which was being heavily pushed. And I knew some people at a former job who were very big Go evangelists. But once you actually learn the language, it's like, this is basically C except with bumpers, and water noodles, and safety pads, and garbage collection. So it wasn't very impressive. I was like, this seems like a step down. You're missing all of the object orientation and all of those fancy features. And they'd usually say something like, “Oh, well, you don't need those features,” or “They're somehow intrinsically bad. They'll confuse you, and you'll make mistakes.” But there's no empirical evidence for that. And the fact is, basically, these features are shortcuts, and having those shortcuts is nice because they allow you to write very complex code without having to spin your wheels. But yes, technically, you don't really need them. You could write everything in Bash, like we were just talking about, and not have to define classes or data structures or anything like that. Yes, you could, but it would be painful, and it would take a lot longer. And thus, we develop these shortcuts to minimize the pain and cut down on development time, because getting your product to market before the other guy means that you win.

And so then basically, I finally took a look at Crystal again because I knew I’d seen it in the past. And the thing that really impressed me, and I also think about this other language called Nim, which is also gaining popularity, is that because the syntax and the semantics of Crystal are basically the same as Ruby, then in theory, you don't really have to spend that much time and energy or money to retrain Rubyists to write Crystal, because they're basically the same language. Except that Crystal introduces some concepts such as strong types, where you don't have implicit type coercion, which is a good thing because that does lead to problems, and some other quirks, like you have structs and then classes, and they behave differently. Structs get passed by value between functions, and classes are allocated on the heap and passed by reference. And that's a thing that a lot of Rubyists might not be familiar with. That's something that's taught in object-oriented programming 101. If you have to learn C++ at university, they introduce this whole concept of pass by value, pass by reference.
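For Rubyists who haven't hit this distinction before, here is a small Ruby illustration of the reference side of it; in Ruby (and for Crystal classes) the callee sees the caller's object, whereas a Crystal struct would be copied on the way into the method and the caller's copy would be left untouched.

```ruby
# Ruby objects (like Crystal classes) are heap-allocated and passed by
# reference, so mutating an argument mutates the caller's object too.
Point = Struct.new(:x, :y)   # note: Ruby's Struct still has reference semantics

def shift_right(point)
  point.x += 1               # mutates the very object the caller holds
end

p1 = Point.new(0, 0)
shift_right(p1)
p1.x  #=> 1  (the caller sees the change; a Crystal struct would still be 0)
```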

And so that's when it really dawned on me. I was like, wow, this thing is basically a secret weapon here, because you have all these people saying, “Oh, Ruby doesn't scale. We need something that compiles down. And the compiler will save us from all of our errors, and we can't trust this finicky interpreted thing.” And now you have this thing that's just right next door, this programming language that has all those features that people are lobbying for. And it's like, why don't we just use it? And then I did a few example projects just to test the waters and found it was very, very usable. And the shards system is basically a packaging and build system in one, where you just edit a simple YAML file. You don't have to create all this crazy structured YAML. It's not some weird DevOps nightmare. You just fill in some basic information and your dependencies, and it'll build your stuff. And I was able to write some decently complex code and port some existing Ruby code over fairly easily. There are similarities between the ecosystems, so in Ruby, I like to use RSpec, and in Crystal, there's a library called Spectator, which is basically RSpec for Crystal. And being able to convert between the two is very simple. The syntaxes are very similar. There are just some minor differences here and there. They don't implement the entire RSpec feature set, which is fine. They'll get there. And I was just blown away. I was like, wow, this was not only easy and not only fast, but it was enjoyable. And basically, I have all the benefits of all those other languages without all the suffering and having to learn a completely new language and then contort myself to whatever bizarre rules the compiler imposes upon me.

Kirk: And that was pretty much my same experience, basically my same experience with Crystal. It's that same thing. I discovered that five, six years ago but didn't think that there was anything I could really do with it then. And yeah, I came back to it last year and was just blown away by how Ruby it was as far as ease of use, the familiarity, and that sort of thing. But the typing and the raw performance of the compiled code just blew me away, and I fell in love with it. I think it's a language that hopefully has a future ahead of it.

Postmodern: Yeah, totally. Also, not to be unfair to the Python people, I'm rooting for the Nim language as well, which is very similar to Crystal. It has many of those same features but has Python syntax. And I have no idea why this language isn't more popular given the ubiquity of Python. And it's like, hello, it's right there, man. Use it. And it is starting to gain popularity. I'm seeing more projects from the InfoSec space because they're very Python-dominated. And they also do lots of native code stuff and things where you need low-level access to memory. So gradually, I’m starting to see more research projects pop up in Nim, doing cool binary, memory-level manipulation for vulnerability research, and exploit development, and offensive tooling, and stuff like that. So it is getting there. But yeah, hopefully, those two languages will prosper.

Kirk: Yeah. I haven't played with Nim very much yet. I know of its existence, but my Python skills are pretty rudimentary. And so I’ve just stayed away from Nim at this point. But it's an exciting future, I think, for our languages. It's been a lot of fun. I'm actually really, really, happy we were able to get this to come together and chat for a while. It's been good. I want to thank you for taking the time to come on the show with me today. Do you want to tell people once again where they can find you online just in case anybody wants to look you up and what you've done?

Postmodern: Yeah. So you can find me on GitHub at Postmodern, github.com/postmodern. And on Twitter @postmodern_mod3, and that's because someone else registered the name Postmodern before me, and then they got taken over by some malware spyware thing and started posting links to malicious websites, and ever since then, their account has been silenced. So I’m probably not going to be able to buy that account from them. But those are the two main sites. I’m also on Discord and IRC, and I'm on Element, also as Postmodern, because Crystal has an Element chat room, formerly known as Riot, riot.app, or whatever. But yeah, those are usually the places where you can find me and get in contact with me.

Kirk: Fantastic. Thank you again so much. And for everybody else out there, thanks for listening to Polyglot. You can find us on Twitter @PolyglotShow. And this is Kirk Haines for The Relicans. You can find The Relicans on therelicans.com. It's a community for software developers to come together and talk about and share whatever they're doing. Thank you very much for listening, and we'll catch you next week.

Jonan: Thank you so much for joining us. We really appreciate it. You can find the show notes for this episode along with all of the rest of The Relicans podcasts on therelicans.com. In fact, most anything The Relicans get up to online will be on that site. You'll also find news there of FutureStack, our upcoming conference here at New Relic. We would love to have you join us. We'll see you next week. Take care.
