RubyConf 2015

What's new in JRuby 9000

Thomas Enebo, Charles Nutter

Transcript

Excerpt from the automatic transcript of the video generated by YouTube.

- So, I'm Tom. This is Charlie. We've been working on JRuby a long time now. But before we start, how many people have exposure to JRuby in some way? - All right, most of you. Good, good. - Like 83.5%. So, for those people who didn't raise their hand, I'm just gonna give a quick overview.

So, JRuby is just another Ruby implementation. We try to be as compatible as we can with CRuby, and we actually support these three versions. Of course JRuby is built on top of the Java Platform, so we get all the benefits that Java has. We didn't have to write our own garbage collectors.

HotSpot makes our code run very quickly, which you'll see in the next slide. The most important thing here for people to notice is that Java has native threads, and so does JRuby, so there's no global interpreter lock.
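
To make that concrete, here's a minimal sketch (not from the talk; the busy_work method is made up) of the kind of CPU-bound workload where this matters. On JRuby, each Ruby Thread maps to a native JVM thread and can run on its own core; on CRuby, the global interpreter lock lets only one of them execute Ruby code at a time.

    # Hypothetical illustration: CPU-bound work split across Ruby threads.
    # On JRuby these are native threads and can execute in parallel;
    # on CRuby the GIL runs them one at a time.
    def busy_work
      sum = 0
      5_000_000.times { |i| sum += i }
      sum
    end

    threads = 4.times.map { Thread.new { busy_work } }
    puts threads.map(&:value).inspect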

- There are a couple of good talks later today. Jerry D'Antonio's gonna talk about how the GIL isn't your savior and how you can do a lot more with real concurrent threads. That's at 1:15 in GM. And then Petr Chalupa is gonna talk about the concurrent-ruby library and building a good set of concurrency primitives and tools that work across all of the different Ruby implementations.

That's 4:20 in this room. So, if you're interested in concurrency at all, those are two great talks to check out. I had to include this as one of my favorite performance graphs for JRuby. So, we've got a benchmark of a red/black tree library. The top bar is CRuby, MRI, running a pure-Ruby red/black implementation, taking about two and a half seconds to run this benchmark.

The benchmark creates a bunch of nodes, traverses them, deletes them, and does that over and over again. And you can see why we often have to turn to C extensions on CRuby. So, the second bar down is CRuby with the C extension; that certainly gets a lot of performance improvement.

And this is, you know, now taking only about 0.5 seconds. At the bottom, though, which is pretty cool, this is a very nicely written, pure-Ruby red/black library. JRuby's able to optimize it; the JVM can do a lot. JRuby running the pure-Ruby red/black tree actually performs faster than CRuby with the C extension here.

And this is all because of the magic of the JVM, awesome garbage collectors, awesome optimizations. - There are also a lot of Java libraries out there. If there's a Ruby gem that isn't cutting it for you, let's say you're doing something with Prawn and you want to do something that Prawn can't do, you can just go over to the Java world and use iText.

- And compare this: there's about 7,000 libraries on RubyGems, so 7,000 libraries versus 47,000 libraries that are in Maven. There's a lot of stuff out there. Just about anything you need, there's a JVM library for it. - 47's greater than seven. It's really easy to call into other languages.

Java's highlighted. Oh, I think-- - Make it here? - Yeah, I think so. Java's highlighted, and it's very easy to call Java with Ruby syntax, but you can call any language that's on the Java platform, like Clojure. - COBOL. - COBOL, yeah.
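
As a rough illustration (not the code from the slide), this is what that Ruby-syntax Java interop looks like in JRuby: after require 'java', classes on the classpath are reachable through the java.* package shortcuts or can be imported under a Ruby constant.

    require 'java'

    # Java classes used directly with Ruby syntax.
    list = java.util.ArrayList.new
    list.add('foo')
    list.add('bar')
    list.each { |item| puts item }   # JRuby mixes Ruby-style iteration into Java collections

    # Or import a class under a Ruby constant.
    java_import 'java.util.HashMap'
    map = HashMap.new
    map.put('answer', 42)
    puts map.get('answer')

    puts java.lang.System.getProperty('java.version')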

So, here are the two supported branches we have. On master it's JRuby 9000, which we're gonna be talking about, and then we still have a maintenance branch for JRuby 1.7. - We'll probably continue maintaining 1.7 for maybe another six months or so, six months to a year. - As long as people actually need 1.9 support is gonna be the answer, I think.

JRuby 1.7 was a very interesting release for us because you can pick which compatibility level you want. You can either run in 1.8 or 1.9 mode with a flag. This ended up being a horrible idea for us, because we have to maintain two runtimes in the same code base, and it just didn't work out so well.

So, for JRuby 9000 we're only gonna support the latest version of Ruby, and we're gonna track the latest version of CRuby. So it's 2.2 right now; it will become 2.3. - Right now 2.3 preview 0, preview 1 is out, and we're gonna start putting the features in.

Hopefully within a month or two after MRI 2.3 is out we'll have JRuby with 2.3 support right away. - So, last Friday before getting on the plane, 9.0.4 came out, and next week when we get back, 1.7.23 will be out. We're very conference-driven here. So, JRuby 9000, these are like the super-high-level bullet points.

We already said how we're tracking CRuby. We have a brand new runtime. We've been working on this runtime for years. Most of this talk will be about this new runtime. We're now bypassing Java for IO; it's mostly just native calls. We can still fall back to Java, but this gives us better performance and, more importantly, it allows us to do some compatibility stuff that we couldn't do using the pure-Java solution.

- Probably the most POSIX-friendly JVM language at this point. - And Oniguruma's transcoding facilities have been completely ported, and we have no more encoding bugs, I promise. A few people might be wondering why we picked 9000 as a version number. - And it's solely because of "Dragon Ball." That's all.

Now, it started as a joke, because we were going to go and say JRuby 2, and then that was about the same time that Ruby 2 was coming out, and that would've been confusing as hell. So, we couldn't come up with a better number and it just stuck.

Charlie's even wearing the shirt today. - That's right. It's over 9000. So, the funny thing is 9000 started out as just kind of a code number for the release. But then we went back and looked, and we had eight previous major releases, 1.0 to 1.7. So, it turned out that this is the ninth major release of JRuby.

So, 9.0, 9000 is our version number. - You can kind of say we're doing the Java numbering scheme, 'cause they went from 1.4 to Java 5. - Yeah. - So, now what? Well, that's the title of the talk, and we do tons of compatibility work. We probably spend a lot more time on compatibility, but no one wants to hear about how we fixed compatibility bugs.

So, we're going to talk about performance. - Okay. So, there's a few recent things that we've done to improve performance for Ruby stuff. Stuff that we've wanted to do for years, but was really hard with the old runtime, made a lot easier by the new runtime work that we've got.

We're going to go over these quick. There. So, the first one: up through JRuby 1.7, when we would JIT-compile code to JVM bytecode at runtime, we only did it on method boundaries. So, if a method got called 50 times or more, then we would turn it into JVM bytecode and you'd get good performance out of it.

The problem is that there's a lot of code out there that just has free-standing procs or lambdas. So, if you have a table of procs that you're using for a bunch of calls, or if you're using define_method, for example, those would never JIT, and so they'd stay in our interpreter and run slow, generally slower than MRI.
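
A hypothetical sketch of the pattern being described (the OPS table and evaluate method are made up for illustration): the hot code lives in free-standing lambdas rather than named methods, so the old method-boundary JIT never compiled it, no matter how many times it ran.

    # Illustrative only: the work happens inside procs, not inside a method
    # body, so method-boundary JIT alone never turned it into JVM bytecode.
    OPS = {
      '+' => ->(a, b) { a + b },
      '-' => ->(a, b) { a - b },
      '*' => ->(a, b) { a * b }
    }

    def evaluate(tokens)
      tokens.each_slice(3).map { |a, op, b| OPS[op].call(a.to_i, b.to_i) }
    end

    p evaluate(%w[1 + 2 7 * 6])   # => [3, 42]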

So, that was something we needed to fix. So, I think it was actually 9.0.3 that the block JITting came out in. I've got 9.0.4 on this graph. So, here we show MRI. The blue bars here are the performance of a normal method, a regular def method.

The other bar is define_method, which uses a block and has some block overhead. And you can see that in JRuby 9.0.1 here, both cases were actually considerably slower than MRI, because the benchmark had a bunch of blocks in it and those didn't JIT. So, not only were we not JITting the define_methods here, which are really slow, we didn't actually JIT the benchmark itself.

Now that we can JIT on block boundaries and on method boundaries, the performance of both is much more where we'd want to see it. Definitely faster than MRI for regular method definitions, and a little bit faster for define_method. And that was the next thing that we wanted to tackle: trying to get define_method to perform a lot better than it does.

It should be closer to a regular method. So, here's an example of two different define_methods. The first one we would consider simple because it's a non-capturing define_method. It doesn't use any state from the surrounding scope; it's just used for simple metaprogramming, to create a method out of a block, basically.

The second one here, we're iterating over some names, defining a method for each one. So, the name actually is getting captured; it uses a bit of state from the surrounding scope. A little bit more complicated. So, those are the two basic cases that we see for define_method.
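
The slide code isn't in the transcript, so here is a hedged reconstruction of those two shapes; the class and method names are made up for illustration.

    class Greeter
      # 1. "Simple" / non-capturing: the block uses no state from the
      #    surrounding scope, only its own argument and self.
      define_method(:greet) { |who| "hello, #{who}" }
    end

    class Settings
      def initialize(values)
        @values = values
      end

      # 2. Capturing: each generated reader closes over `name` from the loop.
      %w[host port user].each do |name|
        define_method(name) { @values[name] }
      end
    end

    puts Greeter.new.greet('world')                  # => hello, world
    puts Settings.new('host' => 'example.org').host  # => example.org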

We wanted to be able to optimize these better. Here's a comparison of the performance before optimizations. You can see that in CRuby, MRI, define_method methods perform about half as well as a regular method definition. And that's due to that extra closure overhead, extra state that needs to be managed, and other various reasons.

In JRuby, we're only slightly better than CRuby, because we had the same sort of overhead to deal with for define_methods. So, a little bit better performance on define_method, but certainly not as close to a full-on method definition. So, the strategy for optimizing these.

[ ... ]

Note: the remaining 4,370 words of the full transcript have been omitted to comply with YouTube's "fair use" guidelines.