RubyConf 2015

Stop Hating Your Ruby Test Suite

Justin Searls

Transcript

Excerpt from YouTube's automatic transcription of the video.

- Alright, high energy, I love it, alright. Doors are closing, and now we're covered. Alright great, can we get my slides up on the monitors? Alright, great. Let me start my timer. Where's my phone, uh oh? Who has my phone? Where's my timer? Alright, we'll start, alright.

So, there's this funny thing where every year conference season lines up with Apple's operating system release schedule, and I'm a big Apple fanboy, and so I like, on one hand really want to upgrade, and on the other hand, really want my slide (mumbles) to work.

So, this year, it was because they announced the iPad Pro, I was pretty excited, I was like, you know, "Maybe this year, finally, like, OS 9 is gonna be ready for me to give, like, a real talk out of, and build my entire talk out of it." So, this talk was built entirely in OS 9, so let's just start it up, see how it goes, I'm a little bit nervous.

Alright, so it's a little retro, it takes a while to start up. I built my entire presentation in AppleWorks. So, I gotta open up my AppleWorks presentation, okay there it is, I gotta find the play button. And here we go, and good, alright. So, this talk is "How to Stop Hating Your Tests."

" My name is Justin, I play a guy named Searls on the internet, and I work at the best software agency in the world, Test Double. So, why do people hate their tests? Well, I think a lot of teams start off in experimentation mode, like they, everything's fun, free, they're pivoting all the time, and having a big test suite would really just slow down their rate of change and discovery, but eventually we got to a point where we're worried that like if we create a new change, we might break things, and so important things stay working.

So, people start writing some test suites. So, they have a build, so when they push new code, they know whether they just broke stuff. But if we write our tests in a haphazard, unorganized way, they tend to be slow, convoluted, and every time we want to change a thing, we spend all day just updating tests, and eventually teams get to this point where they just, you know, yearn for the good ol' days where they got to, like, change stuff and move quickly.

And I see this pattern repeat so much that I'm starting to believe that an ounce of prevention is worth a pound of cure in this instance. 'Cause once you get to the end, there's not much you can do, you can say like, "Hey, our test approach isn't working," and a lot of people would be like, "Well, I guess we're just not testin' hard enough."

" And when you see a problem over and over again, I really personally, I don't believe that the work harder comrade approach is appropriate. You should be always inspecting your workflow, and your tools, and trying to make them better if you keep running into the same issue.

Some other people, they might say, "Okay, well, let's just buckle down, remediate. Testing is job one, let's really focus on testing for a while." But from the perspective of the people who pay us to build stuff, testing is not job one, it's at best job two; from their perspective, they want to see us shippin' stuff.

Shipping new features, and the longer we go with that impedance mismatch, the more friction and tension we're gonna have. So, that's not sustainable. I said we're talkin' about prevention, but if you're, like, working in a big legacy, monolithic application, you know, and you're not greenfield, this is not a problem at all, 'cause I got this cool thing to show you.

There's this one weird trick to starting fresh with your test suite, that's right, you're gonna learn what the one weird trick is. Basically, you just move your tests into a new directory, and then you make another directory, and then you have two directories, and you can write this thing called a shell script, get this, that runs both test suites.
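
A minimal sketch of what that could look like as a Rake task instead of a shell script, assuming the legacy suite lives in test/ and the fresh one in test_new/; the directory and task names are illustrative, not from the talk:

```ruby
# Rakefile -- run an old and a new test suite side by side.
# Directory and task names are assumptions for illustration only.
require "rake/testtask"

# The legacy suite stays exactly where it is.
Rake::TestTask.new(:legacy_tests) do |t|
  t.libs << "test"
  t.pattern = "test/**/*_test.rb"
end

# New tests get a fresh directory with fresh conventions.
Rake::TestTask.new(:new_tests) do |t|
  t.libs << "test_new"
  t.pattern = "test_new/**/*_test.rb"
end

# The build runs both until the old suite can be decommissioned.
task default: [:legacy_tests, :new_tests]
```

Running `rake` then exercises both suites on every build.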

And then, you know, eventually you port them over and you're able to decommission the old test suite. But I hesitated to even give a talk about testing because I am the worst kind of expert. I have too much experience: navel-gazing about testing, building and open-sourcing tools around testing, being on many, many teams as the guy who cared just a little bit more about testing than everyone else, and having lots of hifalutin', philosophical, and nuanced Twitter arguments that really are not pertinent to anyone's life.

So, my advice is toxic, I am overly cynical, I'm very risk-averse, and if I told you what I really thought about testing, it would just discourage all of you. So, instead, my goal here today is to distill my advice down into just a few component parts. The first part, we're gonna talk about structure.

The physicality of our tests, like what the lines and files look like on disk. We're gonna talk about isolation, 'cause I really believe that how we choose to isolate the code that we're testing is the best way to communicate the concept and the value that we hope to get out of a test.

And we're gonna talk about feedback, like do our tests make us happy or sad, are they fast, are they slow, do they make us more or less productive? And keep in mind, we're like thinking about this from the perspective of prevention, because these are all things that are much easier to do on day one, than to try to shoehorn in on day 100.

So, at this point, in keeping with the Apple theme, my brother dug up this Apple II copy of "Family Feud" and it turns out it's really hard to make custom artwork in AppleWorks 6, so I just ripped off the artwork from this "Family Feud" board. We're gonna use that to organize our slides, it's a working board, that means, like, if I like point at the screen and say, "Show me potato salad," I get an X.

But unfortunately I didn't have 100 people to survey, I just surveyed myself 100 times, so I know all the answers already. So, first round, we're gonna talk about test structure. And I'm gonna say, "Show me too big to fail." People hate tests of big code; in fact, have you ever noticed the people who are really into testing and TDD?

They really seem to hate big objects, and big functions more than normal people, I mean we all understand big objects are harder to deal with than small objects, but one thing that I've learned over the years is that tests actually make big objects even harder to manage.

Which is counterintuitive, you'd expect the opposite. And I think part of the reason is that when you've got big objects, they might have many dependencies, right, which means you have lots of test setup. They might have multiple side effects, in addition to whatever they return, which means you have lots of verifications. But what's most interesting is they have lots of logical branches: depending on the arguments and the state, there are a lot of test cases that you have to write, and this is the one that I think is most significant.

So, let's take a look at some code. At this point I realized that OS 9 is not Unix. So, I found a new terminal, actually it's a cool new one, it just came out this week, so let's boot that up. Yep, here we go. Alright, we're almost there, it's a little slow.

Alright, so this is a fully operational terminal. Alright, so we're gonna type in, like, an arbitrary Unix command, that works fine, I'm gonna start a new test. It's a validation method of a timesheet object to see whether or not people have notes entered, and so we're gonna say like, "If you have notes and you're an admin, and it's an invoice week, or an off week, and whether you've entered time or not, all of those four boolean attributes, they factor into whether or not that record is considered valid."

" And at this point I'm writing, I wrote the first test, but I'm like, "I got a lot of other context to write. " Let's like, let's start planning those out, and I'm like, "Damn, this is a lot of test "that I would need to write to cover this case "of just four booleans.

" And what I fell victim to there is a thing called the rule of product. Which is a thing from the school of combinatorics and math. It's a real math thing because it has a Wikipedia page. And what is says essentially is that if you've got a method with four arguments, you need to take each of those arguments, and the number of possible values of each of them, multiply them together and that gives you the total number of potential combinations, or the total number of, upper bound, of like test cases you might need to write.

So, in this case, with all booleans, it's two to the fourth, so we have 16 test cases that we may have to write. And if you're a team that's used to writing a lot of big objects, big functions, you're probably in the habit of thinking, "Oh, I have some new functionality, I'll just add one more little argument.

"Like what more harm could that do, "other than double the number "of test cases that I have to write. " And so, as a result, as somebody who trains people on testing a lot, I'm not surprised at all to see like a lot of teams who are used to big objects want to get serous about testing, and then they're like, "Wow, this is really hard.

"I quit. " So, if you want to get serious about testing, and have a lot of test system code, I encourage you, stop the bleeding, don't keep adding onto your big objects. I try to limit new objects to one public method, and at most three dependencies. Which, to that particular audience is shocking.

The first thing that they all say is like, "But then we'll have too many small things. How will we possibly deal with all the well-organized, carefully named, and comprehensible small things?" And you know, like, people get off on their own complexity, right, they think that what makes them a serious software developer is how hard their job is.

They're like, "That sounds like programming on easy mode." And I'm like, "It is easy, it's actually, like, you know, not rocket science to build an enterprise CRUD application, but you're making it that way. Just write small stuff, it works." So...

Next up I want to talk about how we hate when our tests go off-script. Code can do anything, our programs should be unique and creative, special unicorns of awesomeness, but tests can, and should, do only three things. They all follow the same script. Every test ever sets stuff up, invokes a thing, and then verifies behavior.

We're writing the same program over and over again, and it has these three phases: arrange, act, and assert. A more natural, English way to say that would be given, when, then. And when I'm writing a test I always intentionally call out those three phases really clearly and consistently.
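
A minimal sketch of calling those phases out explicitly, in Minitest; the subject under test is made up so the example stands alone:

```ruby
# Arrange / Act / Assert (or Given / When / Then) called out in every test.
# The Multiplier class is a made-up subject for illustration.
require "minitest/autorun"

class Multiplier
  def multiply(a, b)
    a * b
  end
end

class MultiplierTest < Minitest::Test
  def test_multiplies_two_numbers
    # Given: set stuff up
    subject = Multiplier.new

    # When: invoke the thing
    result = subject.multiply(6, 7)

    # Then: verify the behavior
    assert_equal 42, result
  end
end
```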

[ ... ]

Note: the remaining 5,225 words of the full transcript have been omitted to comply with YouTube's "fair use" guidelines.