Friday, 13 July 2012

Don't make your code "More Testable"

In some ways I'm very excited that the Ruby community seems to have taken up the "Mocking/OO flag" so to speak lately. Most of the Ruby conferences seem to have talks around "Object Oriented Rails", "Isolated Unit Tests", and things like the "SOLID Design Principles". And even more exciting for me is that I see more and more people referencing the Growing Object Oriented Software Guided By Tests book. I think these are all great steps for us towards producing more maintainable software over time.
That being said, I don't think we've quite understood all of the value of the techniques that we're all excited about. We've become focused on making things "testable" rather than using our tests to learn about our design. So I want to take the opportunity to propose a slight course correction in the way that the community has been talking about these principles, and particularly the way that I perceive the rest of the community receiving them.

Example Conversation

I've heard the following conversation happen over and over in various forms lately:
"Excited OO Engineer" has been trying out some of these new "OO" ideas that have been such a buzz in the community on his latest Rails project. He has been creating POROs for all the services that his controllers might want to access, and he's been writing isolated unit tests for each of these components. He's worked hard on the project, and he's particularly proud of how fast all of his tests run. All of his previous Rails projects' test suites took minutes, or sometimes even hours, to run, but all of the tests for this project run in under a second.
Then "Experienced Rails Engineer" joins the team, excited to see how the entire company could learn from this project, particularly its fast test suite. However, as he begins to look at the code, his gut tells him something is wrong. What would previously have been 10-15 lines of code seems to now take 20-30. Everywhere he looks, it seems like objects are creating objects, and he's not even sure why. To make matters worse, in order to understand the flow through the system he needs to read 5 different files rather than just one controller file. Unsure of why all this was necessary, he goes to talk to "Excited OO Engineer".
Experienced Rails Engineer: "Hey, I have a couple of questions about some of the code on the new project, you got a sec?"
Excited OO Engineer: "Sure, what's up?"
Experienced Rails Engineer: "Well, as I was reading through the code, it seemed like there were a lot of additional abstractions that didn't seem necessary to me, and I'd like to understand why we need them."
Excited OO Engineer: "I'm not sure what you mean. Where do you feel like we've over abstracted?"
Experienced Rails Engineer: "Like for example, let's take a look at this PostComment class:"
class PostComment
  def initialize(user, entry, attributes)
    @user = user
    @entry = entry
    @attributes = attributes
  end

  def post
    @comment = Comment.new(@attributes)
    @comment.user = @user
    @comment.entry = @entry
    LanguageDetector.new(@comment).set_language
    SpamChecker.new(@comment).check_spam
    @comment.save!
    CommentMailer.new(@comment).send_mail

    post_to_twitter  if @comment.share_on_twitter?
    post_to_facebook if @comment.share_on_facebook?
  end

  private

  def post_to_twitter
    PostToTwitter.new(@user, @comment).post
  end

  def post_to_facebook
    PostToFacebook.new(@user, @comment).action(:comment)
  end
end
Experienced Rails Engineer: "I could see the necessity of a PostComment object if we were going to need to share or reuse the code, but for now it seems like this could just be put into a controller directly. Why do we need the notion of a PostComment?"
Excited OO Engineer: "Oh, well you see we need that to be able to test the code in isolation from other objects. Doing it this way makes it much easier, and faster to test."
Experienced Rails Engineer: "Ok, well I can kind of see the benefit of that, but some of this seems really confusing to me. Like why does the PostComment take a user, an entry and attributes into its constructor?"
Excited OO Engineer: "That's because we want to be able to replace the user, entry and attributes in our tests with fake objects, so that we can control the flow through the system here."
Experienced Rails Engineer: "And the same would be true of the language detector taking a comment? It doesn't make sense to me that something connected to detecting language would need to know the structure of a comment."
Excited OO Engineer: "Well you see we want to assert that the comment is told to set itself to have the proper language, and so that actually is the responsibility of the language detector. And since we don't want to have to use a real comment within our tests, we need to pass it into the constructor."
The conversation goes on for 15 or 20 minutes. Every time "Experienced Rails Engineer" asks a question about what he feels is a flaw in the design of the code, "Excited OO Engineer" explains that it is necessary to do things that way in order to write fast, isolated tests.
After the discussion, "Experienced Rails Engineer" concludes that in order to write an application with fast tests he has to sacrifice his design, and write 2x as many lines of code. Being the principled architect that he is, he's not willing to make that compromise. So he resigns himself to having a slow test suite, so long as he can continue to write simple, easy to understand code.

So what's wrong here?

Although I'm quite happy that some of these discussions are actually happening inside the community, I think that the flavor is somewhat wrong. The goal of "fast tests" has become an end unto itself, and people are at risk of destroying their abstractions for the sake of "testability".
When we encounter a bit of code that is hard to test (assuming we're letting our tests guide the design of the production code we're going to write), we're going to have one of two responses. We're either going to ask, "how can I write this in a testable way?", or "what about the design of the system is this test complaining about?"
The difference between the two is nuanced, and difficult to grasp when you first start to think this way. With the first approach my primary concern is solving the problem of something being difficult to test. With the second approach my goal is to reevaluate the design of my system in light of the flaw that my test is presenting to me. The end goal of the second approach is improving the design of the system, whereas the goal of the first approach is improving the design of the test suite.
Another way to think about this is to ask what the connection is between testability and good design. A good design is necessarily easily testable, but does that mean that any design that is "testable" is necessarily good? I don't think that it is.
A good design is measured in things like coupling, cohesion, and how well the system hides its complexity. Because of their nature, things with high cohesion and low coupling are almost always very easy to test. As such, when something is not easy to test, most of the time one can conclude that the code does not have high cohesion and low coupling. However unit tests have no hope of telling us if we're doing a good job hiding complexity behind our APIs, or modeling our domain in an understandable way. Since by their very nature they should be isolated, they're unable to serve that function.
As such, when our tests start talking to us, I think we want to listen not so that we can learn how to test them better, but so that we can learn about the flaws in our design, and perhaps even our domain knowledge. They're likely telling us that we're missing something in the way that we're modeling the problem, and begging us to rethink our approach.

A Concrete Example

So let's get concrete, and look at the particular problem of slow tests. When our test suites get slow, why are they slow, and what can we learn from that? In my experience the problem is likely one of two things. First, we may need to instantiate too much of our application in order to get it under test (and hence the whole thing is coupled together). Or second, we're likely explicitly dependent on some external bit of code that is notoriously slow, for instance the database in a Rails application.
In the case of the former, we want to try and separate the concerns of the application along the lines of the domain. There are likely hidden concepts that need to be expressed, named, and modeled. Unless we do this separation we're not going to be able to learn about the what hidden within the how. Once we discover these concepts in isolation from the other parts of the system, it should be easy to model them in a decoupled, and hence testable, way.
In the case of the latter, we're not separating the concerns of the domain from the concerns of the third party software. This means that we're needing to test our domain logic through the third party API. What's wrong with this? Well, again a part of our domain is hiding within this subsystem, coupled to this third party API, hiding the what amidst the how. Just as in the case above, we want to extract that domain logic, properly name it, and hide how it does what it does behind the what of a clean API.
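To make that extraction concrete, here is a minimal sketch. Everything in it (the BulkDiscount class, its rule, its numbers) is invented for illustration; the point is only the shape: the domain rule lives in a plain object that can be exercised without touching a database.

```ruby
# Hypothetical example: a pricing rule that previously lived inside an
# ActiveRecord model, so every test of it had to hit the database.
# Extracted into a PORO, it depends only on the data it is given.
class BulkDiscount
  def initialize(quantity, unit_price)
    @quantity = quantity
    @unit_price = unit_price
  end

  # 10% off orders of ten or more items -- a pure domain rule.
  def total
    subtotal = @quantity * @unit_price
    @quantity >= 10 ? (subtotal * 0.9).round(2) : subtotal
  end
end
```

A test of BulkDiscount now instantiates nothing but the object itself, so it runs in microseconds; the persistence layer simply delegates to it. The speed is a consequence of the domain concept being named and separated, not the goal that drove the design.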

And so what now?

And so I want to encourage us within the Ruby community to make this slight course correction in the way we think about our isolated unit tests. Let's never have someone walk away from our code bases thinking that writing fast tests requires introducing abstractions specifically for testing purposes. Instead, let's listen to our tests as a means of improving our domain models, and hiding the complexity of our systems behind them. Don't be satisfied with your designs just because they're "more testable".

Comments

  • Two bits:
    1) "A good design is measured in things like coupling, cohesion, and how well the system hides its complexity"I agree with this, completely.  For that reason, developing a PORO that focuses on business logic and the application, so that it can be integrated with a structural detail, like a controller in a framework.  The framework is the detail, the business logic is the application.  In theory, applications should be able to operate independently of the framework.  That can ONLY happen if the framework is decoupled, which is to say dependence point from the framework to the application, and (ideally) never backward.  Hence, injections into the PORO are not only common, but a routine design structure in large-scale DDD-style apps.Such detachment of business logic from the framework, incidentally, leads to damned fast tests -- when testing the logic.  You don't need to test the function of the framework, just the integration, which makes that testing often faster by mocks.  A few slow full stack tests, and the world takes care of itself.  That is the consequence of a good design -- not the reason for it.However, slow tests are a BIG smell, suggesting that you have business logic depending on the framework, rather than the other way around.  Now, you may not care about this mode of architecture, but THAT is the focus of this debate -- not fast testing.  However, fast testing is the sensible way to sell these intuitions.And, no fast tests, no real TDD -- you will in FACT get tired of the testing, and either stop writing code, or writing code without tests.  Some of us feel that's a big, big thing.  Indeed, some of us think it is inherently unprofessional.  You don't have to agree with me, but please understand that these two issues: DDD-style application-centered architecture, rather than framework architecture; and XDD versus "I think it works, and the QA will catch the rest."  Fast tests is just the smell that comes from not following those principles.
    2) "Being the principled architect that he is, he's not willing to make that compromise [of adopting a structure yielding half again more klocs]." I could not disagree more. Concise is good, but KLOCs is a meaningless metric, particularly with respect to architecture. Architecture has nothing to do with choice of frameworks, programming languages, or particular code metrics or tools. Architecture is application-revealing, not detail-revealing. More KLOCs for better code and better architecture is better.
    The reason for the dependency management style described above is to make it easier to change all those details and rejigger everything: persistence (type of database, not the brand), communication (queues, e-mail or messaging birds), UI, even the framework entirely.
    Of course, you make many valid points about defects in the code. Code MUST be intent-revealing, particularly the use of objects. On the other hand, part of the complexity you perceive may come from the fact that the code really isn't all application-centric, just heading that way under the coder's efforts. Because it's different from the understandings of others, some of the questions you raised may arise.
  • Andrew, I'm not at all against POROs; I've been using them in my Rails applications for over 4 years. My point is more along the lines of listening to the tests to tweak your design. Just because you're using POROs doesn't mean that you're doing a good job modeling your domain.
    I also totally agree with you that architecture has very little to do with the framework. I've also been doing "dependency injection" (although I hate the term) for several years. My point in the "principled architect" refusing to change his code was not that it was more loc, but that it was completely failing at modeling the domain. The code above was introducing concepts that were not driven by modeling the domain, but by producing "fast tests". The fact that the tests are slow is pointing to modeling problems.
    I've run into very few architects that will actually have a problem with actual lines of code. After all, they're interested in modeling a system, which inherently will produce more objects than a single file. But the lines of code analysis is valuable, in so far as it can sometimes point to a lack of cohesion within the system. Which is what I was trying to point out.
    I hope you don't read me as someone opposed to architecture or TDD. I'm not. I'm deeply attached to both concepts, which is why I want to see people listen to their tests to model their domain, rather than to make their tests faster.
  • Jason
    I've run into very few architects.
  • You seem to be of the view that logic not slated for reuse should be inserted into and be made dependent upon the framework.  Several reasonable people might disagree, thinking that the domain should be articulated in such a way that it has NO dependencies upon either the M, V or C of Rails, using the framework as a detail. 
    In an ideal sense, DDD would build a gem for the application (which I think you mean by your "domain" to be "modeled"), independently tested without reference to the framework, ultimately callable by the framework via its (or another) mainline (be the framework Rails or Sinatra or Django, the persistence being SQL, NoSQL or a file system, none of that relevant to the domain at all), and not the other way around.
    It is slower than hacking directly in the framework, but it has the virtue of freeing you to design your system with principled design structures.  Now, exigencies or bosses may wish to expect you to "go faster" (for now) and charge you demerits for writing "extra code."  At the end of the day, those same bosses come up with the changes that require variations in design that the stodgy old framework does not facilitate, because the code following the framework architecture has now become fragile, brittle and difficult to change without making many changes elsewhere.  Agreed that making the simple too complex (if I am just hammering out a blog with comments that will never change) is bad Karma.  On the other hand, if my system was large enough to require modeling a real domain with non-trivial complexity and business logic, a view to the future -- that is genuine agile development and design -- save much in the overall.
    Yes, a DDD approach does violate the "blog in 15-minutes video" principle that Rails so beautifully addresses, but when we are building a big system in Rails, we often hit that big wall once the system evolves beyond a few objects, where modification becomes complex and error-prone, and lose sight of our actors and use cases, now hidden behind the non-architecture for the "domain model" that is only the delivery mechanism.
    In this view, my domain that you purport to model, doesn't have any controllers in it.  It does have a use case for AUserPostingACommentToTheBlog.  I'm trying to post a comment to a blog, and test the logic therefor.  Putting that logic in the controller of my delivery framework, using delivery framework variables, has the dependency going in the wrong direction.  The new slowness of my tests and its imposition on TDD (that is, people just spiking code without writing slow tests) is just a code smell.
    Like I said, I don't know if the Use Case for AUserPostingACommentToTheBlog is part of a one-off trivial 15-minute blog (in which case this abstraction is senseless and I would agree with your criticism), or part of a more complex and dynamic system, one filled with many rules and actors and stories and use-cases, all of which require changes of different kinds throughout the life of the system.  The smaller and more trivial the system, the less I mind waiting for tests and simple spiking through the framework.  But if ALL my tests take minutes to run, and I am intending to build a monster system with agile methods, I get nervous about misdirected dependencies, particularly when the misdirected dependency is what is making those tests slow.
  • Respectfully, I think you've completely misunderstood me. I'm all for isolating yourself from the framework. We seem to be totally talking past one another.
  • If I have misunderstood you, I apologize.  As understood, you assail the code because you feel it is an abstraction just for testing, adding language not helpful for your modeling, and suggesting that it belongs instead in the Controller, and doesn't require a new object.
    Perhaps this will help us to identify where the impedance mismatch begins.
    But if I have your arguments right, here is where we disagree:  I believe that business rules code should be separately encapsulated, physically, from the framework code.  I further believe that the dependence should entail the framework depending upon the business logic, and not the other way around.
    For those reasons, I see the proposed code as an awkward form of Interactor (using Uncle Bob's renaming of the DDD concepts). An interactor is not a controller, and a controller is not an interactor. It appears to be an effort in refactoring the system so that business logic exists independently from the framework, and so that the framework depends upon the business logic, not the other way around. In both of those senses, the proposed code seems a move in the right direction, even though I confess it entails more KLOCs.
    But as I noted earlier, I cannot pass on the propriety of this "abstraction," without a clear sense of the system design as a whole.  In a community of "spikers," without the intent to isolate the business logic from the framework, it is in fact inexplicable.  The example, as I noted, makes no sense if it is for adding a post to a 15-minute blog.  If it is part of a complex system, I take a different view.
    Are we any closer to joining the issues?
  • Perhaps it would be sufficient to say that I don't generally believe that it belongs inside of the controller. If the action is simple enough then the controller is sufficient, if not I want the update to flow through my domain layer. My complaint *is* that it is adding language not helpful for the modeling. However this is not because more language is bad, but because the particular language that has been chosen has to do with implementation rather than domain concepts. I've described why I think this is in a response to Justin above.
  • solnic, Senior Software Developer at Code Benders
    "Fast tests" are definitely not the goal. It is the outcome of a good design (or at least it should be). The problem with Rails is that it's hard to have fast tests given that in a typical rails project your code is tightly coupled to the bits of the framework that happen to be heavy and slow - specifically controllers and AR models. This is why people are trying to find ways of doing things "outside" of the framework, which is not easy as we have 0 conventions in Rails for doing so.
    The whole movement towards PORO is IMHO good as it makes people think more about the design. OTOH I can definitely see the danger of having a confusing and unnecessary up-front design.
    My approach is that I always start with "The Rails Way" and I pay close attention to what happens when the code base and tests grow. Once I have a good understanding of my domain I can rather easily create higher level abstractions using PORO and make my tests run faster.
    There are many people in our community that felt the pain of having really slow test suites which I think is something that drastically decreases productivity and that's the reason why we're so "obsessed" with fast test suites now.
  • Justin Ko
    You mention "domain modeling" multiple times. If you had read my comments in the gist, you would know I was not "modeling the domain", but modeling use cases. Please read for more info on this approach.
  • Justin,
    I'm sorry that I referenced your code, as it clearly has brought more drama to the post and made people think that I was opposed to architecture or TDD. I did not mean this post to be a knock on you, or your code particularly. I meant to use the code as a meta point to highlight the way in which we were talking about introducing architecture and the way we think about slow tests within the community, but I clearly didn't do this as well as I could have.
    In regard to your particular point about modeling use cases I'd like to push back on that a bit. 
    First off I haven't read Jim Gay's book, and I've only done a minor amount of reading around DCI, so please feel free to correct me if my understanding of what you mean is a bit off. 
    That being said, I'm not sure that I would want to draw a stark distinction between modeling use cases and modeling the domain. In a message-centric, object-oriented sense, the domain is the sum of the different types of processes that the system handles. The messages between these processes carry the data of the system, named after concepts in the domain. In that sense, a given action is going to cause a series of interactions (messages) between these processes which produces the completed result.
    In the Jacobson sense, then, wouldn't a use case be the sum of the messages going between the processes for a given case? The entry points for the "domain" can be named after a use case, and represent the interaction between external actors and the system. The use case has no need to know the internal workings or structure of the system; it just needs to take the data, send a message to the proper entry point (or points) of the domain to start the interaction, then listen for the completion of the action.
  • Justin Ko
    By "modeling the domain", I assumed you meant "trying to model the real world" - nouns/entities, etc.
    This quote from you:
    > The code above was introducing concepts that were not driven by modeling the domain, but producing "fast tests". The fact that the tests are slow are pointing to modeling problems.
    Is why I thought that.
    Given the comment this is replying to, how can you determine the code is not "driven by modeling the domain"?
  • As I read my words there, it probably wasn't helpful to phrase it as if it was a moralistic judgement, and without qualifications like "it seems to me". Let me see if I can perhaps provide a more helpful critique, and perhaps explain what I mean.
    When I look at the PostComment object the first thing that bothers me is the constructor. It seems to take two entities (user and entry) and a value (attributes for a comment). Since this object seems to be a service (a stateless process) this seems wrong to me. It seems like the service should be responsible for "posting comments", but that can't be its role because it takes a single user, a single entry, and a single comment value into its constructor. It seems to be tied only to this single use case.
    This makes me wonder why you felt the need to pass the user and entry and attributes into the constructor. I assume that this was for testability purposes, so that you could replace it with a mock in your test, and assert on what is going to happen to it. This may not have been your particular reason, but it is a reason that I've heard many times, and it bothers me. 
    I would rather pass something into the constructor because I have determined that within my domain this process under test requires this other sort of process in order to be able to operate. Then I would attempt to specify what behaviors the process that I'm testing requires this other process to be able to accomplish, and specify that in terms of a generic role (read: interface) which would be represented by a mock object. This domain contract would then be represented within the system in the form of the test that I'm writing. Thus the service would be coupled not to a concrete class, but to another role within the domain, and that relationship is actually a domain coupling, not an implementation coupling.
    Another thing that bothers me is that within the body of the post method there seems to be several responsibilities. First it is creating a comment object (which seems to be an entity since it can persist itself) from the parameters that have been passed into the constructor. Second it is updating that comment to add some information to itself (language and spam, although you seem to have services that are implicit dependencies to do that). Lastly, when this comment has been "created" and "updated" there are several things that need to happen relative to this comment, and it is doing these things (sending an e-mail, sharing on Twitter, and sharing on Facebook).
    Each of these processes (translation from attributes, updating, and the resulting actions) is actually a concept within the domain, and in my opinion should be modeled as such. It is missing the concept of the comment being successfully posted, which is what the e-mail sender, Twitter sharer, and Facebook sharer are actually dependent on. Without the domain notion of a comment being successfully posted, anytime you want to add or remove one of these services you have to change the "comment poster", even though nothing about actually posting the comment seems to be changing.
    When I say modeling the domain, I don't mean finding all the nouns of the real world and trying to put actions on them. I mean taking the actions or verbs from the real world domain and making sure that they're represented within the messages going between processes. This is essentially what a use case is, is it not?
    I hope this helps clarify, please let me know if I'm still confusing.
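    To illustrate the role-based coupling described in the reply above, here is a minimal sketch. All of the names in it (CommentPoster, FakeNotifier, comment_posted) are invented for this example; the point is that the service depends on a role, and the test's fake object simply plays that role.

```ruby
# Hypothetical sketch: the service depends on a "notifier" role, not on a
# concrete class. Anything that responds to #comment_posted satisfies the
# domain contract.
class CommentPoster
  def initialize(notifier)
    @notifier = notifier
  end

  def post(comment)
    comment.save!
    @notifier.comment_posted(comment)  # the domain contract in action
  end
end

# In a test, a hand-rolled fake plays the notifier role and records what
# it was told, so the test can assert on the domain interaction.
class FakeNotifier
  attr_reader :posted

  def comment_posted(comment)
    @posted = comment
  end
end
```

    Here the constructor argument exists because the domain says "posting a comment requires someone to notify", not because the test needed a seam; the fake stands in for any collaborator that plays the notifier role.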
  • Justin's point resonates with me.  Modelling the real world is unnecessary and futile, moreover leading to errors in design.  The classical Rectangle as a superclass of a Square is an excellent example where modeling the "real world," to no advantage by the way, leads to violations of the substitution principle.
    We model Use cases, Actors, Entities and Values.  The "real world" has no meaning directly helpful to system design.  The "real world" modeling derives from the simulation roots of OO, and really doesn't bear much outside that environment.
  • The "real world" has no meaning in the context of nouns, but it certainly does in the context of verbs. I agree that modeling data (real world entities) and the operations that can be taken on that data generally doesn't work out very well. Please read my reply to Justin's comment above, and see if that helps clarify what I mean.
  • You make many good points in the post that I agree whole-heartedly with. There was one that surprised me: "However unit tests have no hope of telling us if we're doing a good job hiding complexity behind our APIs, or modeling our domain in an understandable way. Since by their very nature they should be isolated, they're unable to serve that function."
    Unit tests do critique one important characteristic of your APIs directly: how isolated is the function under test? The number of other objects that need to be mocked out is an indicator of this -- more mocks indicates less isolation. How well is complexity being encapsulated? The complexity of the mocks speaks to this -- if a mock is returning another mock, that's complexity at some level that every client is required to provide. Is the modeling understandable? Read the dependencies that are set up for the test -- if there's a dependency that doesn't make sense in the context, that's going to be an understanding problem for later readers. ("Why does PostComment need a SystemQuota?" "Because comments are paced based on the last time a user posted." "Hmm... maybe that belongs on user? Or we could introduce a PostPacer?")
    In all these cases it's the code in the test that really reveals these things. By isolating the code for a single operation in one place, unit tests are a great tool for raising questions about the modeling, the complexity, the clarity or other characteristics of the API that you want to develop.
  • Dave, I would put that value under cohesion and coupling, which I said the unit tests can tell us about. What I meant in that section is that unit tests can't directly tell us if we're abstracting at the right place, or modeling the right thing. For instance, they have nothing to say about whether something called a "PostComment" makes sense within the domain, or whether it is actually mixing multiple concepts from the domain (although the code itself may have low coupling and high cohesion).
    Overall though I agree with everything that you said.
  • Is something like a `PostComment` object really an abstraction just for testing? I think many would say it's an object because it's wrapping a use case (an "interactor" object), not because it's an artificial abstraction for testing.
    Roughly, these kinds of things:
    I think it'd be interesting if you provided some specific thoughts on how to improve the code example, though I know this is tough to do in a blog post with limited context. Thanks for talking about this subject.
  • Another problem might be that the mock libraries in Ruby make it very easy to mock class methods. People tend to couple different components to each other, because there is no feedback from the test about this concern.
  • That's what I rely on my acceptance tests for, where almost nothing is ever mocked or stubbed.
  • The code you provided is a bad example. It violates at least 4 of the SOLID principles for OO and clearly would make some functionality harder to test. 
    To fix your class I would move the mailing, post to Twitter, and post to Facebook to observers (not necessarily ActiveRecord::Observer, but at least following the pattern). It's possible that LanguageDetector and SpamDetector could be implemented as observers as well, but I can't tell without seeing how they'd be implemented. Once you remove all that you're left with creating the comment - which could then be moved back into the controller and you'd sacrifice none of the things you've complained about (sure, maybe there would be a few more lines of code - but the code would be much more maintainable).
    I do however agree with you. Fast tests should not be the end-all goal, they are merely a warning light. If something is difficult to test or slow it could be an indicator that you're doing something poorly. Generally well crafted code is easy to test and potentially fast, but poorly crafted code can also be fast and easy to test as well...
        class RandomNumberGenerator
          def get
            6
          end
        end

        it "returns a random number" do
          RandomNumberGenerator.new.get.should == 6
        end
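    A minimal sketch of the observer refactor suggested above, with no framework involved (`CommentCreator` and `MailerObserver` are illustrative names, not from the original gist):

```ruby
# The creator keeps a list of observers and notifies each one after the
# comment is created; mailing and social sharing become separate observers.
class CommentCreator
  def initialize(observers = [])
    @observers = observers
  end

  def create(attributes)
    comment = { text: attributes[:text] }
    @observers.each { |observer| observer.comment_created(comment) }
    comment
  end
end

# One observer per follow-up action; each can be added or dropped on its own,
# without the creator ever changing.
class MailerObserver
  attr_reader :sent

  def initialize
    @sent = []
  end

  def comment_created(comment)
    @sent << comment[:text]
  end
end

mailer = MailerObserver.new
CommentCreator.new([mailer]).create(text: "first!")
```

    The creator's only job is creating the comment; whether one observer is registered or ten, its code never changes.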
  • Giles
    Hey Greg, I'm guessing you did it to avoid drama, and I certainly think you had good intentions, but taking that guy's code and pretending you're talking about an abstract example? Really?
    To be fair to you, the flavor of the discussion you are pretending not to reference was pretty harsh, and we all know who made it unpleasant, and how, and I certainly don't want to invoke his name either, since he'll harass me, demanding that I talk to him, and then he'll imply I'm a stalker when I refuse. I don't even want to know how that was supposed to work.
    So on the one hand, I really think avoiding this issue is the type of drama aversion which causes more drama later on, but I don't blame you, and I'll go along with it (under protest) because it is your blog, not mine.
    I agree with your idea that it's important to differentiate between "why are the tests slow?" and "what are the tests telling me is wrong in the design?", and I think that's a very good point.
    Let's consider another hypothetical debate, one in which Experienced Rails Developer provides much simpler code, but doesn't show us his tests, which are written in a standard Rails style. This hypothetical debate looks like a victory for Experienced Rails Developer, but really ends in a tie, because look at all the code we're not reading. The Experienced Rails Developer had better code, but the Excited OO Developer had better tests. Experienced Rails Developer treated his tests as a rug beneath which to sweep the inelegance of his abstractions.
    In this hypothetical debate, the tests are telling you that Rails enables some clever tricks at the expense of good object-oriented design, which is something people have been saying about Rails since at least 2005, and which was the main motivation behind the Rails 3 refactor.
    I'm of the opinion that the Rails 3 refactor cost the Rails community some significant momentum as far as keeping up with modern web development. But what else could we do? If anything, it seems like we might need to refactor Rails again. Or maybe we say that cheating at object-oriented design was part of what made Rails successful from day one, and a Rails developer has to compensate for that, so if you're a Rails developer, you have to choose between slow tests and OO gymnastics.
    Maybe what we should really figure out is where cheating at OO makes sense, and where it doesn't. After all, Ruby has a lot of elements of functional languages too. Regarding it as pure OO is probably unwise.
  • Out of curiosity what do you think "cheating at OO" looks like? Are we cheating in a functional way or a procedural way? Also, I'm not sure I see such a stark distinction between OO and FP. It seems like objects are just modules in FP, at least the way I generally write them. Do you see that differently?
  • Yep, I think the biggest mistake that I made in the blog post is referencing the already existing code. I should have come up with a totally abstract example. It's created more drama than I figured it would save by not directly referencing it, and has obscured the point of the post in a lot of ways.
    I think you bring up an interesting point in regards to choosing between slow tests and OO gymnastics. You do kinda end up building what Eric Evans calls an "Anticorruption layer" when you want to do message centric OO in Rails, and that definitely feels like gymnastics.
  • Jason
    Reading this article, and the comments below, I can't help but think of this comment from a recent interview that Alan Kay did with Dr. Dobb's (just a little out of context, not much): "I like to say that in the old days, if you reinvented the wheel, you would get your wrist slapped for not reading. But nowadays people are reinventing the flat tire. I'd personally be happy if they reinvented the wheel, because at least we'd be moving forward." -- the interview is at
  • myronmarston
    Good thoughts.  I generally agree with what you're saying, but I also think that testability (and the speed of the tests) have important benefits of their own--so even though you should focus the code on correctness and a good design, having easily testable code that allows me to have fast tests enables me to do a better job at creating correct code and iteratively arriving at a good design, because it allows me to refactor ruthlessly.  Given two alternate designs, if neither is clearly better than the other but one is more testable, I'm going to pick the more testable one every time, even though it doesn't appear to be designed any better or worse.  So I'll freely admit to making choices about how to structure my code with testability in mind!
    "Good design" is hard to objectively measure.  "How fast do my tests run?" is very easy to objectively measure--and it has a _huge_ impact on my productivity.  On my current project, the time from making a code change and getting feedback from the corresponding tests is usually 0.5 - 1.5 seconds.  Having spent time on projects that had 45 minute test suites (with 30+ seconds of environment boot time until the first test runs), having this kind of fast test feedback is like some kind of drug.  I can iterate on my code and refactor so much faster than I've been able to on other projects.
    I think the community has been talking so much about fast tests not because they are an end, but rather because they make us so much more productive at producing working, maintainable code.  Blog posts like yours are a great reminder that fast tests shouldn't be the goal, and that tests should be used as a design tool (something I think I'm very poor at but starting to get better at).  I suspect that once I get better at using tests to drive my design I'll stop thinking so consciously about fast tests and testability and more about what my tests are telling me about my design, but I'm not there yet.
  • Your last paragraph is exactly the response I would hope for. My primary goal here is that people would realize that tests are a means to an end, and start trying to listen to them, even if they're poor at it to begin with. Thinking about the issue is what led me to grow a lot in the area over time.
  • myronmarston
     What are your thoughts about what I said above concerning the value of fast tests (apart from how the code is designed)?  That was actually the main point I was trying to make in my comment: fast tests are not the end, but I've found they make a huge impact on my productivity at advancing towards my goal of working, maintainable software.
  • I think I would want to agree, but with some qualifications. 
    My concern is that unless we're paying attention to the underlying design the "fastness" of tests can actually be quite misleading. What I mean by that is in my experience when we're mocking concrete objects our tests often do not provide us actual assurance when we go to refactor those objects. They'll give us false negatives or false positives and the churn rate for the tests increases and slows our productivity. If I'm having to change my tests in order to change my code, then we're merely distributing the work time between running our tests and rewriting production code, and not actually saving any time.
  • myronmarston
     I think we're on the same page with different nuances :).
  • Wael Nasreddine, Design & Coding at TechnoGate & Agence Durable, Ruby on Rails / PHP, HTML5 / CSS3. System Administration Linux & Unix flavors.
    Is this post related in any way to this gist?
  • I took the code from that gist, but I don't want to knock on justinko at all. I used it more to illustrate a trend of discussions I've heard.
  • Mike Pack, Ruby/Rails/JavaScript engineering for blood and gold
    I don't understand the intentions of this article.
    I believe I understand your underlying concepts and the point you're trying to express. I'm interested in teasing out your take-home message. I'll try to be succinct.
    First, I agree with the other commenters that the goal is to formulate a better design in lieu of fast tests. I won't reiterate that point. I think this is the take-home message you're trying to convey, however it doesn't come off cleanly.
    Second, I find your Rails vs OO Engineer discussion a distraction from your overall point. Even the vernacular used is meant to skew the tone. You have an "Experienced Rails Engineer" and an "Excited OO Engineer" which inherently throws "OO Engineers" into the bucket of overzealous hipsters. Discussions around proper object-orientation have been prevalent for quite some time. Why not lose the skew and make your point directly by comparing "Rails Engineers" and "OO Engineers," just sayin'.
    I do find strong delineations between "Rails Engineers" and "OO Engineers" but that's not even the point you're trying to make, as far as I can tell. You're advocating progressive enhancement and are concerned with premature design, akin to the purest of TDDers. Why is Rails involved in this at all? Are you trying to say that we should start with logic in the controller and when, and only when, the tests become slow or awkward do we move them into POROs? When the new PORO abstraction becomes slow and awkward to test, do we then produce higher abstractions and stronger decoupling?
    Third, the title of this article is an unnecessary distraction, if not plain wrong, though bait-y enough to get me here. All throughout you actually discuss the direct opposite of what the title implies. You actually advocate making code more testable, just listen to the tests as a prerequisite. In the case of the Rails vs OO Engineers, why isn't it a feasible response to say, "well, I did have it in a controller and the tests were slow as hell, so I moved it here?" You've lost a bit of context in this discussion, particularly what Justin Ko had been thinking while writing the example code.
    Fourth, you've missed a huge burden on test duration: Rails startup time. Most Rails projects I've worked on have at least a 3 second startup time. If your code starts in a controller and maneuvers its way to higher abstractions, you'll encounter the same damn issue 100% of the time. You'll always need to include the Rails environment and I'll likely see pigs with wings before I see the Rails + dependency stack boot up in a timely manner.
  • Mike, it seems like a good number of people have come to the same conclusions as you after reading this article so there must have been something confusing and flawed within my rhetoric. 
    I would consider myself an "OO Engineer", and I have absolutely no problem with immediately introducing POROs into the system. I've personally done that for years, and I never test my controllers because all they're doing is wiring. The point I was trying to make is that when our POROs look just like a normal controller people are inevitably going to say something like "why not just put it in the controller?", and the only answer that you can give is "in order to speed up the tests". In my opinion this is almost always wrong. 
    We shouldn't structure our code so that it is "more testable", we should structure it so that it is well designed, and inherently it will be "testable". Discussions about design of code should never need to invoke "testability", since if it really is an improvement to the design itself, the "side effects" of "testability" should be visible themselves. The tests are helpful (and for me necessary) in the design process because of the feedback they give on the usability of the design, but the key thing to remember is that *they're giving feedback on the design*, and that's how we should hear what they're saying. Any improvement they suggest should be readily visible in the form of a superior design, not merely in the tests. 
    Further, when people are aiming at "testable", they will often be satisfied with their design before finding all of the domain rules or concepts within the code. In my experience this is generally because they end up mocking concrete objects rather than roles (see "Mock Roles, Not Objects" ), or mocking types they don't own (ex ActiveRecord, see the same paper). An example of this would be everything that takes a comment into its constructor in the example code.
    I purposefully didn't directly point out what I thought the flaws of the example code were, because I didn't want the blog post to get bogged down in the details, and have people miss the forest for the trees. After all, I wanted to critique the goal of "testability" rather than put forward a specific design alternative. But given the confusion, let me offer the following critiques of the example code:
    (1) The PostComment takes a user and an entry into its constructor, presumably so that it can replace them with mocks in its tests. These are not roles (read interfaces), but actual instances of objects. There is nothing but a single object type that I could pass into each of these. 
    (2) There is no real distinction between the constructor and the post method. The object can only be used for a single user, entry and attributes, which means that post is going to be called right after construction. I don't see why it all couldn't be put into the post method, or all put into the constructor. Maybe this is for readability at the call site?
    (3) Inside of the post method there are actually several responsibilities being carried out. First, it's creating a comment object from the parameters. Second, it's updating the comment to add some additional information (language and if it is spam). Third, when this comment has been "created" there are several things that need to happen, and it's doing these things (sending an e-mail, sharing on Twitter, sharing on Facebook). 
    At the very least the object that is responsible for creating this comment should be emitting an event like "comment_created" which the emailer, and social sharers could listen to, and then decide for themselves if they should do their thing. There is no reason for the object that does this updating/creating to have a dependency on all the potential actions that take place when it succeeds. 
    My goal wasn't to stop people from going down the POROs road, but to push them further down it, and away from the Railsy sort of architecture (read procedural programming) into a message oriented domain model.
    Is that helpful to clarify?
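    The event-emitting design described in that reply could be sketched roughly like this (`EventEmitter`, `CommentPoster`, and the `comment_created` event are illustrative names, not code from the post):

```ruby
# A minimal, framework-free event emitter; listeners subscribe to named events.
class EventEmitter
  def initialize
    @listeners = Hash.new { |hash, key| hash[key] = [] }
  end

  def subscribe(event, &listener)
    @listeners[event] << listener
  end

  def emit(event, payload)
    @listeners[event].each { |listener| listener.call(payload) }
  end
end

# The poster only knows how to create comments and announce that it happened.
# It has no dependency on mailers or social sharers.
class CommentPoster
  def initialize(events)
    @events = events
  end

  def post(user, entry, attributes)
    comment = { user: user, entry: entry, text: attributes[:text] }
    @events.emit(:comment_created, comment)
    comment
  end
end

events = EventEmitter.new
notifications = []
events.subscribe(:comment_created) { |c| notifications << "email about #{c[:text]}" }
events.subscribe(:comment_created) { |c| notifications << "tweet about #{c[:text]}" }

CommentPoster.new(events).post("alice", "entry-1", text: "nice post")
```

    Adding or removing a follow-up action means adding or removing a listener; the poster itself never changes.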
  • I'm not sure I understand your argument. Testability is a part of good design, although there is more to good design than that. I'd be surprised if anyone would disagree.
    As far as I understand your argument, then, you seem to be saying that PostComment may be testable, but is not good design. You make three points why this supposedly is the case. I'm looking at these points and I'm not convinced.
    (1) Roles vs. Objects. That there is just a single possible class whose instances may play a certain role doesn't make the role any less a role. I don't see anything bad here. Neither do I see anything bad in making it easy to inject the objects taking part in an interaction.
    (2) Why are arguments passed to the constructor not to #post? To me that's obvious, but that may be because I regularly use a similar idiom. The constructor establishes the context of instances involved in the process/interaction. If there were anything specific to posting, it would be a parameter to #post. Say, the comment is not posted immediately, but is scheduled for posting at a specific time. I would either add an optional parameter to #post or add a new method, #schedule_post, with the time when the comment should be posted.
    (3) The #post method could be clearer. I would push more of its code down into the models. Everything up to and including spam checking probably ought to be handled in the Comment model. However, the CommentMailer and post_to_x methods are quite fine where they are. They are more closely related to the process of posting a comment than to comments in general.
    Does PostComment still look so bad? If the alternative was a post_comment method in a controller, I'd say PostComment comes out ahead.
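    The constructor-establishes-context idiom described above might look like this in miniature (a sketch, not the gist's actual code):

```ruby
class PostComment
  # The constructor fixes the participants of the interaction...
  def initialize(user, entry, attributes)
    @user = user
    @entry = entry
    @attributes = attributes
  end

  # ...while anything specific to the act of posting travels with the method,
  # e.g. an optional time at which the comment should go live.
  def post(at: Time.now)
    { user: @user, entry: @entry, text: @attributes[:text], posted_at: at }
  end
end

result = PostComment.new("alice", "entry-1", text: "hi").post(at: :tomorrow)
```

    The same instance could equally support a `#schedule_post` method; the point is that per-posting detail stays out of the constructor.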
  • I have to say I generally agree with Michael, but would add a few remarks:
    (a) The key benefit of passing arguments is to facilitate decoupling the PORO from the framework itself.  Not only for the purposes of permitting isolation testing, but to ACTUALLY isolate the logic from the framework detail.  The fact that this design feature facilitates faster isolation testing is just a by-product.
    (b) On loading logic into the models, I don't know that I can agree, but far more context (indeed, definitions of use cases and the like) needs to be established to address that question.  In any case, for me to join this, the notion of a Model in this context would not include a descendant of ActiveRecord, because it would violate single responsibility and make testing unmanageably slow.  Again, the design defect leading to slower testing.
  • I'm fine saying that as an alternative to the method in a controller it "comes out ahead" in the sense that it doesn't require all of Rails to bootup, but I still think the design is flawed.
    Your critique of #2 is what I'm trying to draw our attention to. You said, "The constructor establishes the context of instances involved in the process/interaction". A service doesn't have dependencies on instances (entities or values), it has a dependency upon other services. It should be able to perform the service on any entity/value of a particular type as part of its contract.
    I would also argue that the client of the comment poster service should have no notion of whether the post is going to be processed immediately, or processed later. When you introduce that concept you're leaking how you're doing what you're doing into your interface, which means that there is an implicit coupling between the components. If the post is not ready to be posted at the time that it is given, then it is the responsibility of the comment poster service to know what to do, not the client of the comment poster service. 
    In regards to your critique of #3, I would argue that the CommentMailer and post_to_x methods are *not* a part of the process of posting a comment. Those are actions that sometimes happen *after* a comment is posted inside of the domain. In order for those actions to change in the future why does the comment poster need to change? Nothing about actually posting the comment has changed.
  • Let's concentrate on where the issue appears to be. You contend that "A service doesn't have dependencies on instances (entities or values), it has a dependency upon other services." I don't see why this would have to be the case. Is there any argument I have missed?
    I would not characterize the PostComment class as a service, to begin with. In Jacobson's terminology, from Software Reuse, it is a control type:
    "A control object performs use case specific behavior. Control objects often control or coordinate other objects. A control type offers behavior that does not belong to an entity or interface type." (p. 428)
  • If I'm understanding you (and Jacobson) correctly then I'm not sure I see the necessity of these "control types". It seems to me like it's attempting to introduce some level of global flow control into the system so that you can have a single point per use case which controls the interaction between all objects. I tend to agree with Kent Beck and Ward Cunningham when they say, "The most difficult problem in teaching object-oriented programming is getting the learner to give up the global knowledge of control that is possible with procedural programs, and rely on the local knowledge of objects to accomplish their tasks." What I'm understanding you to be articulating relative to this PostComment class to me seems like a regression to a more procedural oriented style. Please tell me if I'm misunderstanding you (or Jacobson).
  • I would love to see some example code written in a way that solves the problems you outline here. Especially #3, with events and listeners. I find myself writing code like the example code contained in this post and would like to go further.
  • is something of an example. Not perfect or complete, but prolly sketches the idea
  • Justin Ko
    I'm confused. I'm confused because I'm not sure if you're joking or not....
    That code is insanely more complicated (listeners, processors, etc.).
  • Also quick note: the Rails startup time is actually a symptom of a design problem with Rails. You need to instantiate the whole thing in order to do your controller tests because they're coupled together. You don't want your individual components to be coupled to the entire Rails system, hence you decouple using a PORO. There are many concrete design benefits to this independent of the test speed. For example you want to be able to independently scale your domain model from your view rendering, likely putting them on different machines so that you can spin up more domain machines as needed.
  • "Like why does the PostComment take a user, an entry and attributes into its constructor?"
    Would the initialize method not need to accept user, entry and attributes, regardless of whether they are mocked in testing, since the user is posting a comment to an entry? Therefore it needs all three pieces of information.
    +1 Great article.
  • It seems very odd to me that it is passing values, and not services, into the constructor. It seems to me like it is leaking the structure of where it is being used.
  • Can you provide an example of what you mean by value vs. service? How would the code example be improved in your opinion? Would it just be all in the controller, or would there be a different kind of object structure involved?
  • windock
    I believe he meant something like: creation of the services in this case is a blatant violation of the Dependency Inversion Principle.
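    A before/after sketch of that Dependency Inversion point (`CommentChecker` and `NaiveSpamDetector` are made-up names for illustration):

```ruby
# Hard-wired: the class constructs its own collaborator, so neither callers
# nor tests can substitute a different implementation without editing it.
class RigidCommentChecker
  def spam?(text)
    NaiveSpamDetector.new.spam?(text) # dependency created in place
  end
end

# Inverted: the detector is injected; any object answering #spam? will do.
class CommentChecker
  def initialize(spam_detector)
    @spam_detector = spam_detector
  end

  def spam?(text)
    @spam_detector.spam?(text)
  end
end

# One possible implementation of the injected role.
class NaiveSpamDetector
  def spam?(text)
    text.include?("buy now")
  end
end

checker = CommentChecker.new(NaiveSpamDetector.new)
```

    The injected version depends on an abstraction (the #spam? interface) rather than on a concrete class, which is the inversion the principle asks for.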
  • Oleg
    Great post. Complexity just to chase some acronym is pretty bad.  Also, how slow is slow? I have a Rails project here with about 500 tests, over 2000 assertions. The whole thing runs in under 20 seconds. There is no mocking happening here either.
    Maybe you guys should lay off factories. That should speed things up.
  • windock
    Definition of "slow" depends on the layer of tests. Each layer usually makes tests 10x slower.
    So 1000 unit tests may run under a second, 1000 integration tests under 10 seconds, 1000 functional tests under 100 seconds. And 1000 black-box tests (with browser, real 3rd party services, etc) may take hours.
    The problem is in slow unit tests, because they guide development. They should be blazingly fast - fractions of a second, giving fast feedback, thus leaving the developer in a state of flow and giving him the ability to refactor as much as he needs. Even 2 seconds is slow; 20 seconds is a disaster. The developer would be afraid to refactor, since he would need 20 seconds to verify code. And without refactoring the architecture would not "grow".
  • Evgeni Dzhelyov
    The black-box test suite serves different purposes. Only the part of your test suite that corresponds to your test strategy will be used for guiding the development, and you want that to be fast enough so you don't lose focus while coding. Other parts of your test suite will serve different needs and will be inherently slower.
  • Nick Gauthier, coder, gamer, motorcyclist.
    Great post. Reminds me of the following concept from lean manufacturing: optimizing for local efficiency decreases global efficiency. Meaning that if you make a certain sub-system more efficient, it must do so by reducing the efficiency of the entire system. The goal is to have an efficient global system.
    As applied to code, we want to have the entire global system be optimal along the attributes of usability (from a code perspective), simplicity and readability, function, integrity, etc.
    But when you optimize a specific class for testability, you must reduce the entire system's quality in exchange. Testability is a nice indicator, like code coverage, but is not the true end goal of the system itself. A user neither seeks nor recognizes a product that has high testability. Their closest metric is functionality and error rate.
    Personally, I use tests as a way of describing a potential use-case of the code. I will write the test first as if I wished the code worked the way I described. I won't put any mocks or anything when writing the test. It's not until I need to mock something (an object external to the unit test) to get the test to pass that I will add a mock.
    Consider this: If you could write your entire test suite without any mocking at all, and without any optimization for testability, and that test suite could run in less than 1 second, would you optimize for testability and add mocks? No. Therefore testability is not a goal.
    Keep up the great work, love reading your posts.
  • Gregory Moeck
    I guess the slightly different thing for me is that I don't view mocks as a testing optimization. I view mocks as standing in for a decoupled role that another object is going to play. I use the "mock" to flesh out what type of collaborators the object is going to need, and define the interface that role is going to require.
  • Nick Gauthier, coder, gamer, motorcyclist.
    Sure. Your focus is still on the code, not on the test. You are working on writing code, and the test is helping you. As opposed to writing the test for its own sake, and then writing the code that works best with the test.
    You can write a test with mocks as "ok then my widget will weld a cog" and so create a mock for the cog. Because you have the widget and its behavior in mind.
    Personally I still don't mock much, because unless my tests start to take a pretty long time, I'd rather have the objects interact so I don't miss any refactorings.
  • You hit it right on the head with the first section.
    For me though, I use mocks quite often because I don't want whatever object that I'm working with to know what implementation of the collaborator it's talking to. I consider my tests to be defining the encapsulation barrier of the object. Whatever objects the tests know about, the production code must necessarily know about. Using a mock as opposed to a real object forces me to think what role the collaborator is playing, which I find forces me to decouple my objects. This makes it so that any object which implements the API of the mock can be swapped in via composition in order to change the total behavior of the system. If the API isn't generic, but specific that is much harder to accomplish.
    So it's really all about production code to me :)
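    Nick's widget-and-cog example a few comments up can even be done without a mocking library; the "mock" is just a hand-rolled object that plays the role (all names here are illustrative):

```ruby
# The widget only knows its collaborator plays the "welder" role, i.e. it
# responds to #weld. The test defines that interface, not a concrete class.
class Widget
  def initialize(welder)
    @welder = welder
  end

  def assemble(part)
    @welder.weld(part)
  end
end

# A hand-rolled stand-in for the welder role, recording what it was asked to do.
class RecordingWelder
  attr_reader :welded

  def initialize
    @welded = []
  end

  def weld(part)
    @welded << part
  end
end

welder = RecordingWelder.new
Widget.new(welder).assemble(:cog)
```

    Any production object that implements #weld can now be composed into the widget, which is the decoupling the mock was pointing at.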