Posted May 31, 2010
Or, “success is the ability to change”.
By Steve Horn
My friend Jon is starting a training course in an effort to quickly bring developers up to speed on test-driven development. As part of this effort, he has developed a unit test suite aimed at helping folks understand Rhino Mocks, a popular testing tool. After looking through the tests and seeing a lot of the Rhino API that I wasn’t familiar with, I wondered whether there were similar undiscovered nooks in my favorite mocking framework: Moq.
After getting permission, I took his tests that use Rhino Mocks and implemented them with Moq. It was a great exercise not only for learning the two APIs, but also for getting a side-by-side comparison of the frameworks. The differences in most cases are subtle, and ultimately I found the choice between them to be a matter of taste.
I’m in agreement with Jon that the differences between mocks and stubs are mostly semantic, and are best reserved for academic discussions. I called all the variables in my tests “mock” because the class for creating mocks and stubs with Moq is “Mock”.
The reason I am such a fan of Moq is that the API “surface” is minimalistic. The methods I want to use most often are within clear sight and meaningful. There’s not a lot of extra jargon and noise that I have to think about. The methods that I use 20% of the time are tucked away in a manner that is still discoverable, yet hidden enough to signal that they’re not the first path I should choose.
When I want to create a mock with Moq I have one choice:
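(The original code sample here didn’t survive; a minimal sketch, assuming a hypothetical `IAccountService` interface, would look like this.)

```csharp
using Moq;

// Hypothetical interface, used only for illustration.
public interface IAccountService
{
    decimal GetBalance(int accountId);
}

public class CreatingWithMoq
{
    public static decimal Demo()
    {
        // Moq offers exactly one creation path: new Mock<T>().
        var mock = new Mock<IAccountService>();

        // Whether it behaves as a "mock" or a "stub" depends on how you use it:
        mock.Setup(s => s.GetBalance(42)).Returns(100m);

        return mock.Object.GetBalance(42); // 100m
    }
}
```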
With Rhino I have several choices:
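(Again, the original sample is missing; a sketch of Rhino Mocks 3.5’s static factory methods, reusing the same hypothetical `IAccountService` interface, might be:)

```csharp
using Rhino.Mocks;

// Hypothetical interface, used only for illustration.
public interface IAccountService
{
    decimal GetBalance(int accountId);
}

public class CreatingWithRhino
{
    public static void Demo()
    {
        // Several entry points, each with slightly different semantics:
        var dynamicMock = MockRepository.GenerateMock<IAccountService>();       // unexpected calls return defaults
        var stub        = MockRepository.GenerateStub<IAccountService>();       // never fails verification
        var strictMock  = MockRepository.GenerateStrictMock<IAccountService>(); // unexpected calls throw
        // ...plus GeneratePartialMock<T>() for concrete classes.
    }
}
```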
The more decisions I’m forced to make, the more disruptive the API becomes.
A few examples of pros/cons of the APIs
Creating a mock of a concrete class with virtual methods
…in Rhino Mocks:
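(The side-by-side samples didn’t make it into this copy; sketches of both, assuming a hypothetical `PriceCalculator` class with a virtual method, might look like this.)

```csharp
using Moq;
using Rhino.Mocks;

// Hypothetical concrete class, used only for illustration.
public class PriceCalculator
{
    public virtual decimal Discount() { return 0m; }
}

public class PartialMockExamples
{
    public static void WithMoq()
    {
        // Moq: the same one-liner used for interfaces works for concrete
        // classes, overriding only the virtual members you set up.
        var calc = new Mock<PriceCalculator>();
        calc.Setup(c => c.Discount()).Returns(0.1m);
    }

    public static void WithRhino()
    {
        // Rhino Mocks: a partial mock created through a repository,
        // which must be switched into replay mode before use.
        var mocks = new MockRepository();
        var calc = mocks.PartialMock<PriceCalculator>();
        mocks.ReplayAll();
        // calc now runs real code except where expectations were recorded.
    }
}
```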
The simplicity of Moq’s API shines. (Replay?*)
*Replay is a concept that was prevalent in prior versions of Rhino Mocks, and it continues to make appearances. The record/replay syntax is still available for backward compatibility, which adds to the verbosity of the API. Read more about record/replay.
Finding the arguments passed to a stubbed method
…in Rhino Mocks:
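(The samples are missing here too; a sketch of the two approaches, assuming a hypothetical `IAuditLog` interface, might be:)

```csharp
using System.Collections.Generic;
using Moq;
using Rhino.Mocks;

// Hypothetical interface, used only for illustration.
public interface IAuditLog
{
    void Record(string message);
}

public class CapturedArguments
{
    public static string WithMoq()
    {
        // Moq: capture the argument as the call happens, via Callback.
        string captured = null;
        var log = new Mock<IAuditLog>();
        log.Setup(l => l.Record(It.IsAny<string>()))
           .Callback<string>(message => captured = message);

        log.Object.Record("hello");
        return captured; // "hello"
    }

    public static string WithRhino()
    {
        // Rhino Mocks: interrogate the mock after the fact.
        var log = MockRepository.GenerateStub<IAuditLog>();
        log.Record("hello");

        IList<object[]> calls =
            log.GetArgumentsForCallsMadeOn(l => l.Record(Arg<string>.Is.Anything));
        return (string)calls[0][0]; // "hello"
    }
}
```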
I prefer the Rhino syntax in this case. The concept of a “callback” is clear enough, but it is not obvious that a callback is the mechanism I should be using to inspect passed arguments.
There are some features that are exclusive to the current versions of the libraries:
Methods with Ref Arguments

Rhino Mocks provides full support for methods with ref arguments. Moq does not seem to. (I have an ignored failing test in my code. If you know how to make it pass, please let me know!)
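For reference, Rhino’s ref support looks roughly like this (a sketch, assuming a hypothetical `IParser` interface; `Arg<T>.Ref` both matches the incoming argument and writes a value back through it):

```csharp
using Rhino.Mocks;
using Rhino.Mocks.Constraints;

// Hypothetical interface, used only for illustration.
public interface IParser
{
    bool TryParse(string text, ref int value);
}

public class RefArgumentExample
{
    public static int Demo()
    {
        var parser = MockRepository.GenerateStub<IParser>();

        // Arg<int>.Ref matches any incoming value and sets it to 42 on the way out.
        parser.Stub(p => p.TryParse(
                Arg<string>.Is.Anything,
                ref Arg<int>.Ref(Is.Anything(), 42).Dummy))
              .Return(true);

        int value = 0;
        parser.TryParse("ignored", ref value);
        return value; // 42
    }
}
```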
Mocking Protected Methods
Moq provides built-in support for mocking protected methods. You can still test protected methods with Rhino mocks, but you’ll have to write some reflection code to do so.
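A sketch of Moq’s `Protected()` support (the class and method names are hypothetical; note that the protected member is addressed by name as a string, with matchers from `ItExpr` rather than `It`):

```csharp
using Moq;
using Moq.Protected;

// Hypothetical class with a protected virtual member, used only for illustration.
public class ReportImporter
{
    public int Import(string line) { return ParseLine(line); }
    protected virtual int ParseLine(string line) { return line.Length; }
}

public class ProtectedMemberExample
{
    public static int Demo()
    {
        var importer = new Mock<ReportImporter>();

        // The protected method is identified by its string name.
        importer.Protected()
                .Setup<int>("ParseLine", ItExpr.IsAny<string>())
                .Returns(7);

        // Import() runs real code, but its call to ParseLine() is intercepted.
        return importer.Object.Import("anything"); // 7
    }
}
```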
I favor the style and feel of the Moq API for the reasons described above, but as they say: “To each his own!” Hopefully you’ll find the side-by-side comparison helpful in determining your preferred mocking framework.
Have I missed anything? Please let me know if I’ve misrepresented or missed critical information that would help to clarify this comparison.
Software development is full of wicked problems:
“[A wicked problem is] a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize. Moreover, because of complex interdependencies, the effort to solve one aspect of a wicked problem may reveal or create other problems.” (http://en.wikipedia.org/wiki/Wicked_problem)
So I began to ask myself how other industries handle wicked problems. The root of the problem is uncertainty. It’s hard (impossible?) to be certain that the designs we’re generating are going to produce an elegant solution. One example I found was the healthcare industry. Here’s one coping strategy:
“If we are uncertain about the relative intrinsic merits of any [different] treatments, then we cannot be certain about those merits in any given use of one of them – as in treating an individual patient. So it seems irrational and unethical to insist one way or another before completion of a suitable trial. Thus the answer to the question, ‘What is the best treatment for the patient?’ is: ‘The trial’. The trial is the treatment. Is this experimentation? Yes. But all we mean by that is choice under uncertainty, plus data collection. Does it matter that the choice is ‘random’? Logically, no. After all, what better mechanism is there for choice under uncertainty?”
Ashcroft R. Giving medicine a fair trial. British Medical Journal 2000;320:1686.
This author’s antidote to uncertainty in medicine? Learning, trial, and experimentation.
We need to test the viability of the new ideas we come up with so we can choose the best one. If only there were some way to apply this practice to software development!
Since the release of ASP.Net MVC, one of the major talking points has been: when should I use MVC versus webforms, and will it save or cost me time? These are my thoughts.
Consider the following definition: velocity is the number of feature hours (or feature points) completed in a given iteration (or other time period, provided the period is constant).
Why does it take longer to gain velocity with MVC?
As ASP.Net web developers, we’ve been developing atop a mammoth abstraction that has made a ton of our design decisions for us. To achieve velocity with webforms, you really needed only a cursory understanding of how the web works. Contrast that with ASP.Net MVC, which puts developers closer to “the metal” and forces us to make intentional decisions about the design of our apps. With this freedom (and power) comes the responsibility not to make the wrong decisions, so it takes a great deal of critical thought and planning to make the application what you want it to be.
Once your design and opinions have been established, velocity is likely to increase and then stabilize. Why? Because if you’re doing it right, then you’re designing with SOLID* principles in mind (something that’s extremely hard to do with webforms). Complexity is minimized and entropy is mitigated.
What’s with the sharp decrease in velocity with webforms?
Because it’s so hard to create SOLID* apps with webforms, complexity grows and grows until it’s out of hand. Release cycles go from weeks to months to quarters, and then the app that the company invested millions of dollars in needs to be rewritten because the cost to maintain and/or extend outweighs the cost to start over.
What do you think?
*SOLID principles are not new and whizbangy. They’re a different name and face on what’s been known about OO since the first OO languages, and what every CS college student should have learned in OO design 101.
I've been doing the software consulting circuit for a while, although not too long - about 2 years. It's enough time to know that there's a lot of room for improvement, and I'm going to discuss in this blog post what trends I think will emerge as the industry matures. Many of the points I'm going to make are based on observations I've made about software quality, and how organizations will begin to value it more.
Too many projects end in something less than success. It's time that we change perspectives on software development and become fanatical about achieving our customers' goals. What I think this means is that we completely change our business models. Instead of the old-school fixed-bid or time-and-materials contracts, let's move toward incentive-based contracts. In other words, I get paid when I've made you money. Doesn't that take a huge amount of trust? Yeah it does, and that's where I think we need to get. (You can do some more reading on "Agile Contracts" here.)
Incentive-based contracts encourage a healthy economy between partners. There's only one goal: to make each other successful. Now when I come to my partner and ask if it is acceptable to take care of some technical debt, or begin using Test Driven Development, he or she will know that I am only doing it because it is a true investment - and I know that it will benefit me (us!) in the future.
Agile, Scrum, XP, Waterfall - they all have one thing in common: they don't mean the same thing to any two people. It's all about communication. We have got to stop applying tools to the process by default and instead have one goal in mind: keep improving. I think we'll see companies start to understand that in order to manage a software team you have to let the software team do the managing. There are no better people to make management decisions than your team. They're on the front lines with the most comprehensive knowledge available, and will always make the best decision possible.
Teams will still require leadership, but leadership should begin to lean more and more on their people to do the critical thinking and solve not only software problems, but also process problems. In summary: More thinking, less blindly applying tools and/or process.
It sounds reasonable to believe that a developer is a developer is a developer. You always hear how everyone is replaceable and we're all just cogs in the wheel. I'd argue otherwise, however. Developers who have built relationships, established trust, and learned each other's strengths and weaknesses will outperform any team that has been banded together on the fly. Managers and decision makers will eventually begin to see this as they decide who will build their software. Thus you'll see more software teams grow and strengthen, and their services will become more valuable.
As people mature in their careers as software developers, skill sets will mature to the point where it will become expected that feature development occurs in a vertical fashion. By this, I mean that each developer on a team should be able to construct a software feature from top (UI) to bottom (database) with minimal intervention from other developers. There will always be developers whose talents lean more toward user interface design than toward SQL Server (for instance), and they will be valuable as team members. But every team member should have the ability to work independently to provide value to the paying customer.
I've come onto a customer's site on a staff-aug contract to be seated at a computer that my mother would have grown frustrated with after using a word processor. This is an edge case for sure, but hopefully it illustrates that our clients don't always understand the need for quality tools. I don't blame them - it's not their area of expertise. We are brought in to fill a need, and we know the most about what we require, so we should be the ones to provide the tools necessary to do the job. Other trades require their craftsmen to provide their own tools; take, for instance, mechanics or construction workers.
We know this: software does not fail to meet customer expectations because of technology. Our first goal shouldn't be to start writing code, but to understand the domain and help our customer make more money. We should be looking at their model and suggesting improvement points. We should be more than the geeks who write code, we should be looked at to solve challenges - technical or otherwise.
In a couple of companies that I've consulted for on a staff-aug basis, I've seen software systems that were almost literally held together by duct tape and papier-mâché. The observation I made was that the company was hiring contractors almost exclusively, and that these hard-working people would swoop in, get a cursory understanding of the system, and make patchwork fixes. These fixes were always just good enough to get the system operational again. And that is probably noble of them, because everyone knows that the way to prove you're worth your salt is to be the hero and save the day. This model has got to end.