
Thread: How would you unit test this method?

  1. #1

    Thread Starter
    Member
    Join Date
    May 2006
    Posts
    60

    How would you unit test this method?

    I have a public method that I need to test.

    Code:
    // Order Service Class that has a public method to create an order
    ....
    public bool CreateOrder(OrderModel order)
    {
     ...
    }
    A model like this is then passed to the method and used to create the order:
    Code:
    public class OrderModel
    {
         public int orderNumber {get; set; }
         public decimal orderAmount { get; set; }
         ....
         ....
         public DateTime orderDate { get; set; }
         ....
    }
    Now, I'm using a third party API and my service class is basically a wrapper for the API.
    I call several methods of the API in my single CreateOrder method.
    The model passed to my method is dictated by the API.

    So, any field in the model may be invalid (like the order number, part number, part class, customer ID, etc.).

    My models can have many fields (over 50 in some cases). Here I'm using an Order as an example, but there are several different models I'm working with.

    So, would I need to create a set of tests for every single field in the model?

    How would I go about this?

  2. #2
    Super Moderator jmcilhinney's Avatar
    Join Date
    May 2005
    Location
    Sydney, Australia
    Posts
    110,274

    Re: How would you unit test this method?

    Yes, you need to test that the appropriate validation error is generated for each piece of invalid data. Also, you shouldn't be invoking that API in a unit test. You should write your code so that the API can be mocked. That way, you can also write tests that ensure your method behaves correctly when the API call succeeds and when it fails, without ever actually invoking the API.
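
    For what it's worth, a rough sketch of one way to make the API mockable: hide it behind an interface you own and inject it into the wrapper service. The names here (IOrderApi, OrderService, FakeOrderApi) are hypothetical, not from the original post.
    Code:
    // Interface owned by your code that stands in for the third-party API
    public interface IOrderApi
    {
        bool SubmitOrder(OrderModel order);
    }

    // The wrapper depends on the interface, not on the real API
    public class OrderService
    {
        private readonly IOrderApi _api;

        public OrderService(IOrderApi api) { _api = api; }

        public bool CreateOrder(OrderModel order)
        {
            // model validation would go here, followed by the API calls
            return _api.SubmitOrder(order);
        }
    }

    // In a unit test, a hand-rolled fake (or a mocking library) replaces the
    // real API, so both the success and failure paths can be exercised
    public class FakeOrderApi : IOrderApi
    {
        public bool ShouldSucceed { get; set; }
        public bool SubmitOrder(OrderModel order) => ShouldSucceed;
    }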

  3. #3
    PowerPoster
    Join Date
    Jul 2002
    Location
    Dublin, Ireland
    Posts
    2,148

    Re: How would you unit test this method?

    You should aim to test every field in your model with both a positive test and a negative test
    (i.e. one that should not raise a validation error and one that should).

    There are tools (such as "Pex") to help generate unit tests, but for a few hundred tests I'd do it manually.
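
    If you do write them by hand, parameterised test cases keep the volume manageable. A minimal NUnit-style sketch, reusing the hypothetical OrderService and FakeOrderApi from the previous reply and assuming CreateOrder returns false for invalid input:
    Code:
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class CreateOrderValidationTests
    {
        // One fully valid model, used as the baseline for every negative case
        private static OrderModel ValidOrder() => new OrderModel
        {
            orderNumber = 12345,
            orderAmount = 99.95m,
            orderDate = DateTime.Today
        };

        private static OrderService NewService() =>
            new OrderService(new FakeOrderApi { ShouldSucceed = true });

        [Test]
        public void CreateOrder_accepts_valid_order()
        {
            Assert.IsTrue(NewService().CreateOrder(ValidOrder()));
        }

        // Each TestCase invalidates a single field while the rest stay valid
        [TestCase(0)]
        [TestCase(-1)]
        public void CreateOrder_rejects_invalid_order_number(int badNumber)
        {
            var order = ValidOrder();
            order.orderNumber = badNumber;

            Assert.IsFalse(NewService().CreateOrder(order));
        }
    }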

  4. #4
    You don't want to know.
    Join Date
    Aug 2010
    Posts
    4,578

    Re: How would you unit test this method?

    I'm late, but just found this subforum and want to chime in.

    You write tests to define what your program expects, to ensure that it does the right thing when it encounters things that aren't that. If we test valid and invalid inputs for every field of OrderModel, that means we have to write both tests for every method that takes an OrderModel. That's real stinky if you have a lot of fields times a lot of methods. So, what do we do? Get lazier.

    First, note that you don't need a positive test for each individual field. One test with all valid data should hit that case just fine. If you think about this case for a while, you realize there's really no way to isolate a case that's just valid data in one field, because it overlaps with n-1 "invalid data" cases. So if we have 50 fields, we need at most 51 tests. But if you have 10 methods that take an OrderModel, you need 510 tests. That's boring and tedious.

    The first thing that comes to mind is this helper method:
    Code:
    public void ValidateOrder(OrderModel order) {}
    Now here's something juicy to test. Maybe it throws an exception if one field is wrong. Maybe you create some object it returns to indicate the error. Maybe it's just a true/false. Either way, now you only have to write that set of 51 tests once, and maybe code up a generic one you can call on each thing that takes an OrderModel to make sure it properly validates:

    Code:
    public void Test_that_method_validates_OrderModels<TReturn>(Func<OrderModel, TReturn> method) {
        foreach (var badModel in _badModels) {
            // Assert.Throws expects a delegate, not the result of the call
            Assert.Throws<SomeException>(() => method(badModel));
        }
    }
    That still stinks if there's 50 fields. So what can you do? Get lazier. You could consider T4 templates to generate a lot of the tests, but first you might consider a field-by-field analysis. Here's where I get controversial.

    Ideally, I like to validate things like OrderModel way before they get to methods like this, so I don't have to do the validation in the method itself. I want CreateOrder() to fail because of some exceptional system event, not because I passed it bad data. Someone creates the OrderModel; that someone should guarantee it behaves itself. If you can't make them verify that, I understand -- some APIs are hooligans and you can't trust them. But if you control the method that creates OrderModel, it's better to prove it can't return invalid data than to put the burden of validation on the rest of your program.

    As an anecdotal example: I decided years ago that I'd just stop using null as a valid value for anything in my programs. If I really need something to be in an uninitialized state, I use the Null Object pattern, but this is a very rare case. This means you will almost never find a null check anywhere in my code, unless I have to use a third-party API that returns it. Even then, the null check is only to replace it with a null object or immediately throw an exception. This has eliminated so much cruft in my code, I started expanding it to other things. I can ignore literally hundreds of unit tests on every project, because in general when I screw up and let a null slip by almost all of my tests throw NullReferenceException. When I make a factory method, and the output value has valid ranges, I write tests to ensure my factory throws instead of setting data to invalid ranges. That means downstream, I don't have to validate and re-validate my data. If I can enforce that my upper layers validate all data, then my lower layers can focus on doing the ugly things I'm hiding from the upper layers.
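
    For anyone unfamiliar with the pattern, a minimal sketch of "Null Object instead of null" (ICustomer, NullCustomer and the repository are hypothetical names):
    Code:
    // Instead of returning null, the lookup returns an object whose members
    // are safe no-ops, so callers never need a null check.
    public interface ICustomer
    {
        string Name { get; }
        void ApplyDiscount(decimal percent);
    }

    public sealed class NullCustomer : ICustomer
    {
        public static readonly NullCustomer Instance = new NullCustomer();
        public string Name => string.Empty;
        public void ApplyDiscount(decimal percent) { /* deliberately does nothing */ }
    }

    public class CustomerRepository
    {
        public ICustomer FindById(int id)
        {
            ICustomer found = LookupInDatabase(id);  // a third-party call might hand back null
            return found ?? NullCustomer.Instance;   // the only null check in the code base
        }

        private ICustomer LookupInDatabase(int id) => null;  // placeholder for the real lookup
    }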

    If I am in a SUPER high-reliability scenario, then sure, I suck it up and write tests for cases I think are improbable, but dangerous. But if I can, through code review with a peer, decide that there's no logical way for an event to happen, I don't waste a lot of my time testing it. If OrderModel has 50 fields, you might decide only 20 realistically need tests.

    From a purity perspective, this is risky business that goes against the idea that 100% test coverage is the only way to be sure your code functions well. Instead, it says there's some reasonable level of confidence you want, and by striving to meet that level you can meet your schedule better. There's some interesting discussion in this thread. That's Mr. Kent Beck himself pointing out that he doesn't meddle with unit tests beyond a certain depth. I think with diligent thought, a reasonable developer can carefully select shortcuts that meet that reasonable level of risk.

    But by all means, if you skip a test and it causes a bug later, you aren't doing squat until you write the test.
    Last edited by Sitten Spynne; Aug 19th, 2015 at 07:27 PM.
    This answer is wrong. You should be using TableAdapter and Dictionaries instead.

  5. #5
    PowerPoster kfcSmitty's Avatar
    Join Date
    May 2005
    Posts
    2,248

    Re: How would you unit test this method?

    Quote Originally Posted by Sitten Spynne View Post
    First, note that you don't need a positive test for each individual field. One test with all valid data should hit that case just fine. If you think about this case for a while, you realize there's really no way to isolate a case that's just valid data in one field, because it overlaps with n-1 "invalid data" cases. So if we have 50 fields, we need at most 51 tests. But if you have 10 methods that take an OrderModel, you need 510 tests. That's boring and tedious.
    Maybe I'm misunderstanding you, but why would you do this? Assuming you're using annotations for your models, you could simply put invalid data in the model and call the validate method. You can then assert that the number of errors returned matches the number of fields you're validating, as well as asserting that the messages come back as you expect.

    For the 510 tests scenario, again, why? If you've created a service, you can simply mock the response and ensure that the .validate method has been called and that your method returned the value you expected in an invalid model scenario. Since .validate is a 3rd party method, there should be no need for you to verify that it works, only that it was called (or you created your own .validate which would be tested in another test). Similar to the idea that you don't need to call a web service, since it isn't your code.
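
    A minimal sketch of the annotation approach, using System.ComponentModel.DataAnnotations (the attributes shown on the model are illustrative, not from the original post):
    Code:
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;

    public class AnnotatedOrderModel
    {
        [Range(1, int.MaxValue, ErrorMessage = "Order number must be positive.")]
        public int orderNumber { get; set; }

        [Range(0.01, double.MaxValue, ErrorMessage = "Order amount must be greater than zero.")]
        public decimal orderAmount { get; set; }
    }

    public static class OrderValidation
    {
        // Collects every validation error in one pass, so a single test can
        // assert both the number of errors and their messages.
        public static IList<ValidationResult> Validate(object model)
        {
            var results = new List<ValidationResult>();
            Validator.TryValidateObject(model, new ValidationContext(model), results,
                validateAllProperties: true);
            return results;
        }
    }
    A test could then set two invalid fields on one model and assert that Validate returns exactly two errors with the expected messages.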

    Quote Originally Posted by Sitten Spynne View Post
    From a purity perspective, this is risky business that goes against the idea that 100% test coverage is the only way to be sure your code functions well.
    I've also never been a fan of the 100% code coverage argument. I can't remember where I saw this, but it is generally my thought on code coverage.

    Early one morning, a programmer asked the great master:

    “I am ready to write some unit tests. What code coverage should I aim for?”
    The great master replied:

    “Don’t worry about coverage, just write some good tests.”
    The programmer smiled, bowed, and left.

    ...

    Later that day, a second programmer asked the same question.

    The great master pointed at a pot of boiling water and said:

    “How many grains of rice should I put in that pot?”
    The programmer, looking puzzled, replied:

    “How can I possibly tell you? It depends on how many people you need to feed, how hungry they are, what other food you are serving, how much rice you have available, and so on.”
    “Exactly,” said the great master.

    The second programmer smiled, bowed, and left.

    ...

    Toward the end of the day, a third programmer came and asked the same question about code coverage.

    “Eighty percent and no less!” replied the master in a stern voice, pounding his fist on the table.
    The third programmer smiled, bowed, and left.

    ...

    After this last reply, a young apprentice approached the great master:

    “Great master, today I overheard you answer the same question about code coverage with three different answers. Why?”
    The great master stood up from his chair:

    “Come get some fresh tea with me and let’s talk about it.”
    After they filled their cups with smoking hot green tea, the great master began to answer:

    “The first programmer is new and just getting started with testing. Right now he has a lot of code and no tests. He has a long way to go; focusing on code coverage at this time would be depressing and quite useless. He’s better off just getting used to writing and running some tests. He can worry about coverage later.”

    “The second programmer, on the other hand, is quite experienced at both programming and testing. When I replied by asking her how many grains of rice I should put in a pot, I helped her realize that the amount of testing necessary depends on a number of factors, and she knows those factors better than I do – it’s her code after all. There is no single, simple answer, and she’s smart enough to handle the truth and work with that.”

    “I see,” said the young apprentice, “but if there is no single simple answer, then why did you answer the third programmer ‘Eighty percent and no less’?”

    The great master laughed so hard and loud that his belly, evidence that he drank more than just green tea, flopped up and down.

    “The third programmer wants only simple answers – even when there are no simple answers … and then does not follow them anyway.”
    The young apprentice and the grizzled great master finished drinking their tea in contemplative silence.

  6. #6
    You don't want to know.
    Join Date
    Aug 2010
    Posts
    4,578

    Re: How would you unit test this method?

    Quote Originally Posted by kfcSmitty View Post
    Maybe I'm misunderstanding you, but why would you do this? Assuming you're using annotations for your models, you could simply put invalid data in the model and call the validate method. You can then assert that the number of errors returned matches the number of fields you're validating, as well as asserting that the messages come back as you expect.
    Ah, yes, if you have a data validation method that returns that kind of response that makes one big fat negative case, but you also have to design your code a special way to make it work. I was thinking of validation code like this:

    Code:
    if (orderModel.Id <= 0) {
    	throw new OrderValidationException(orderModel, "Id is invalid.");
    }
    
    if (orderModel.Customer == null) {
    	throw new OrderValidationException(orderModel, "Customer is not specified.");
    }
    ...
    In this kind of validation, you don't really get to know if multiple fields are invalid, just the first one that is. So that's why I said one test per field. Now, if you implement it a different way, that changes:

    Code:
    var errorInfo = new OrderModelValidationInfo(orderModel);
    
    if (orderModel.Id <= 0) {
    	errorInfo.AddError("Id is invalid.");
    }
    
    if (orderModel.Customer == null) {
    	errorInfo.AddError("Customer is not specified.");
    }
    
    ...
    
    if (errorInfo.IsInvalid) {
    	throw new OrderValidationException(errorInfo);
    }
    This can usually catch every invalid field in one fell swoop. But it gets tricky if some of the fields are calculated from each other, because you might not be able to legally construct objects that break every rule. It also might not be clear from your one-shot error test whether every error is covered. I like for tests to make one assertion, and testing all invalid properties at once is a stretch because it's hard to understand.

    I'm not arguing, really. Your point is valid and represents an error handling strategy I forgot to consider because I usually don't use it.

    Quote Originally Posted by kfcSmitty View Post
    [the code coverage koan quoted in the post above]
    This is beautiful.

    I like to throw exceptions the moment I know data is invalid. This means usually my validation happens in constructor calls, immediately after initialization, and/or in factory methods. If I can guarantee that only direct, hard-coded constructor calls lead to invalid data, I can write tests to prove all paths to a direct, hard-coded constructor call reject invalid data. Once I prove that, the other 99% of my program can assume data is always valid.

    I wrote APIs for a long time, and it meant all my parameters came from user code I couldn't control. That meant EVERY method I wrote HAD to have property-by-property validations and very detailed error messages that also had detailed documentation explaining exactly what to do if you got each error. It was a nightmare, and made things very slow.

    So say the third-party API creates OrderModels and can sometimes return invalid ones. I don't like dealing with unpredictable APIs. I'd wrap every possible "factory" entry point that returns an OrderModel with my own method that performs post-validation. That means I have to test my validation code, but if I write a single ValidateOrderModel() at least I only need to do it once. After forcing all factory methods to go through my validation, the rest of my program can happily assume its OrderModels are valid.

    Sometimes that effort doesn't work. Sometimes the place where you display the errors to the user is separated from where you create the object, so you need to shuttle the "bad" data through several layers. That's cool, but it's also a good reason to consider wrapping OrderModels with a decorator that adds your validation information. Then you can validate close to the source of the problems, and the rest of the program need only check a "HasErrors" property before deciding what to do. It's a lot easier to mock out one boolean property than to craft specific error cases.
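
    A rough sketch of that decorator idea (ValidatedOrder and its members are hypothetical names):
    Code:
    using System.Collections.Generic;

    // Wraps an OrderModel together with validation results produced close to
    // where the model was created; downstream code only checks HasErrors.
    public sealed class ValidatedOrder
    {
        private readonly List<string> _errors = new List<string>();

        public ValidatedOrder(OrderModel order)
        {
            Order = order;
            if (order.orderNumber <= 0) _errors.Add("Order number is invalid.");
            if (order.orderAmount <= 0m) _errors.Add("Order amount must be greater than zero.");
            // ... remaining field checks
        }

        public OrderModel Order { get; }
        public bool HasErrors => _errors.Count > 0;
        public IReadOnlyList<string> Errors => _errors;
    }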

    But the question was, "How would you test this?" and that's my answer. I found the question gave me pause, because in my code if I have an OrderModel it's already validated. I find myself far less productive when I can't make that assertion as a precondition.
    This answer is wrong. You should be using TableAdapter and Dictionaries instead.

  7. #7
    PowerPoster kfcSmitty's Avatar
    Join Date
    May 2005
    Posts
    2,248

    Re: How would you unit test this method?

    Quote Originally Posted by Sitten Spynne View Post
    Ah, yes, if you have a data validation method that returns that kind of response that makes one big fat negative case, but you also have to design your code a special way to make it work. I was thinking of validation code like this:

    Code:
    if (orderModel.Id <= 0) {
    	throw new OrderValidationException(orderModel, "Id is invalid.");
    }
    
    if (orderModel.Customer == null) {
    	throw new OrderValidationException(orderModel, "Customer is not specified.");
    }
    ...
    In this kind of validation, you don't really get to know if multiple fields are invalid, just the first one that is. So that's why I said one test per field.
    Ah yeah, it makes sense when throwing exceptions like that. I work in a web world so we typically are validating forms and returning all the errors to the user.

    Quote Originally Posted by Sitten Spynne View Post
    This can usually catch every invalid field in one fell swoop. But it gets tricky if some of the fields are calculated from each other, because you might not be able to legally construct objects that break every rule. It also might not be clear from your one-shot error test whether every error is covered. I like for tests to make one assertion, and testing all invalid properties at once is a stretch because it's hard to understand.

    I'm not arguing, really. Your point is valid and represents an error handling strategy I forgot to consider because I usually don't use it.
    When it comes to throwing exceptions, I typically only let them throw when I can't handle it. I actually wrote a wrapper for our MVC framework (nancyfx) that handles all uncaught exceptions, so we don't even try to catch something unexpected; when an exception is thrown, the user gets a 500 error page with a correlation ID to submit to the help desk, and all relevant information to the request and user gets logged into our logs. The only thing we need to handle is the error message if we are making an ajax call.

    Quote Originally Posted by Sitten Spynne View Post
    This is beautiful.
    Yeah, I think I saw it on VBForums the first time, but it definitely hits home. We recently ran into a situation where we paid a contractor to write code for us (in Salesforce -- which requires a minimum of 75% coverage for a deploy); the contractor wrote tests to hit the 75% coverage but didn't bother to actually test that things were working properly. He assumed that if an exception wasn't thrown, everything worked as expected. Since then I've been even more wary of just trying to hit a certain % of coverage.

    Quote Originally Posted by Sitten Spynne View Post
    I like to throw exceptions the moment I know data is invalid. This means usually my validation happens in constructor calls, immediately after initialization, and/or in factory methods. If I can guarantee that only direct, hard-coded constructor calls lead to invalid data, I can write tests to prove all paths to a direct, hard-coded constructor call reject invalid data. Once I prove that, the other 99% of my program can assume data is always valid.
    An interesting way to do it, definitely, but does that mean you wouldn't allow modifying the values after the constructor has run? If you do allow that, would you simply make sure you call your validation again, or do you strictly enforce that everything is readonly and can only be modified in the constructor?

    Quote Originally Posted by Sitten Spynne View Post
    I wrote APIs for a long time, and it meant all my parameters came from user code I couldn't control. That meant EVERY method I wrote HAD to have property-by-property validations and very detailed error messages that also had detailed documentation explaining exactly what to do if you got each error. It was a nightmare, and made things very slow.

    So say the third-party API creates OrderModels and can sometimes return invalid ones. I don't like dealing with unpredictable APIs. I'd wrap every possible "factory" entry point that returns an OrderModel with my own method that performs post-validation. That means I have to test my validation code, but if I write a single ValidateOrderModel() at least I only need to do it once. After forcing all factory methods to go through my validation, the rest of my program can happily assume its OrderModels are valid.

    Sometimes that effort doesn't work. Sometimes the place where you display the errors to the user are separated from where you create the object, so you need to shuttle the "bad" data through several layers. That's cool, but also a good reason to consider wrapping OrderModels with a decorator that adds your Validation information. Then you can validate close to the source of the problems, and the rest of the program needs only check a "HasErrors" property before deciding what to do. It's a lot easier to mock out one boolean property than craft specific error cases.

    But the question was, "How would you test this?" and that's my answer. I found the question gave me pause, because in my code if I have an OrderModel it's already validated. I find myself far less productive when I can't make that assertion as a precondition.
    You're a much nicer person than I am. On the web services I've written, I take the user's data and bind it to a model. If the model doesn't validate or throws an exception, the user receives a BadRequest status back. Nothing more... they should read the documentation to ensure they're passing the correct data. I gather you're talking about a desktop API, though, rather than a web service. If it is a desktop app, I stand more on the side of letting unknown exceptions be thrown, so that the user can see what they broke. Of course I try to handle everything I can, but I'd rather let the user see the stack trace than catch the error and throw a custom error message that may not help them.

  8. #8
    You don't want to know.
    Join Date
    Aug 2010
    Posts
    4,578

    Re: How would you unit test this method?

    Quote Originally Posted by kfcSmitty View Post
    An interesting way to do it, definitely, but does that mean you wouldn't allow modifying the values after the constructor has run? If you do allow that, would you simply make sure you call your validation again, or do you strictly enforce that everything is readonly and can only be modified in the constructor?
    Yeah, it's sounding more and more like we operate in such different worlds that we're both kind of shocked by how the other operates.

    First off, the consensus of the software engineering world is that life is much easier when all data is immutable. In this idealized world, a class cannot exist if it is not valid. I haven't seen a language that makes that particularly easy yet; it's relatively hard to do fully immutable data in .NET languages. But I like the theory.

    All of my backend programs are the kind of thing that probably 90% of programming does. I receive data from Service A, parse it, turn it into data that Service B understands, then send it to Service B. In that case, I don't have much of a need to modify things outside of construction. In reality, I avoid constructors and have get/set properties everywhere, and know that calling a setter outside of a factory method is a sin except in rare, carefully understood circumstances. They're immutable by convention, not by implementation.
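
    A small sketch of "immutable by convention" in practice (OrderModelFactory is a hypothetical name): setters exist on the model, but the only sanctioned path to an instance is a factory method that throws before invalid data can escape.
    Code:
    using System;

    public static class OrderModelFactory
    {
        public static OrderModel Create(int orderNumber, decimal orderAmount, DateTime orderDate)
        {
            if (orderNumber <= 0)
                throw new ArgumentOutOfRangeException(nameof(orderNumber), "Order number must be positive.");
            if (orderAmount <= 0m)
                throw new ArgumentOutOfRangeException(nameof(orderAmount), "Order amount must be greater than zero.");

            // Setters are only called here; everywhere else the object is
            // treated as read-only by convention.
            return new OrderModel
            {
                orderNumber = orderNumber,
                orderAmount = orderAmount,
                orderDate = orderDate
            };
        }
    }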

    If you're a web guy, you generally HAVE to submit data, validate it, then decide what to do. Even if you're using AJAX to avoid a reload, you still have that cycle. On my end, since my UIs are live and I don't have to talk to a server to do everything, I can set up my UI to use my validation methods to do things like disable any "Save" actions when required fields are blank. The only way out of my screens is to either provide valid data or choose some form of "Cancel" action. So re-validation isn't really necessary, especially when everyone's using the same code for validation.

    Quote Originally Posted by kfcSmitty View Post
    You're a much nicer person than I am.
    Well, we had reasons to be so thorough. Our API was mostly published for "engineers who can program", not "programmers who are engineers". That meant we, as non-insulting as possible, assumed our customers were not seasoned developers and couldn't work their way through what a NullReferenceException might mean. Or that if we documented "Don't set these two properties to these values at the same time", 99% of customers would be confused why they got weird behavior when they did just that. So any "do not do this" had to be backed by something that parroted the documentation and pointed to the "right" choices for anything that might lead to failure.

    I like the idea. I believe in "the pit of success". But it's very, very tedious to communicate that way to API consumers. It's so liberating to not start every. Single. Method. Call. With a block of code like:
    Code:
    count.Validate().IsPositive().IsLessThan(Maximum);
    items.Validate().IsNotNull().HasAtLeast(count).ContainsNoNulls();
    ...
    I don't like dumping a stack trace on the user unless that stack represents code they can extend. If it's my code, a stack trace won't help them figure out what they did wrong. Maybe that's where I picked up my validation habits: I like to do what I can to ensure deep-layer exceptions can't happen or can be described in terms of high-layer concepts.
    This answer is wrong. You should be using TableAdapter and Dictionaries instead.

  9. #9
    PowerPoster kfcSmitty's Avatar
    Join Date
    May 2005
    Posts
    2,248

    Re: How would you unit test this method?

    Sorry, I've been really busy lately so I didn't have a chance to post.

    Quote Originally Posted by Sitten Spynne View Post
    Well, we had reasons to be so thorough. Our API was mostly published for "engineers who can program", not "programmers who are engineers". That meant we, as non-insulting as possible, assumed our customers were not seasoned developers and couldn't work their way through what a NullReferenceException might mean. Or that if we documented "Don't set these two properties to these values at the same time", 99% of customers would be confused why they got weird behavior when they did just that. So any "do not do this" had to be backed by something that parroted the documentation and pointed to the "right" choices for anything that might lead to failure.
    I honestly started out that way, but since I've gotten more and more into python, I've moved more towards a dynamic language way of thinking. EAFP and duck typing. I validate what I can (usually using data annotations for my models) and some basic null checks, but for the most part, I use my web handler and logger to log if someone does something wrong. It is quite a liberating way of doing things, but it is also highly dependent on my workplace -- right now all my code is internal (as opposed to the products I sold before), so I only have myself and the other devs I work with to worry about. We're all pretty religious with documentation, so it works out well and helps us push out our applications faster.

    Quote Originally Posted by Sitten Spynne View Post
    I don't like dumping a stack trace on the user unless that stack represents code they can extend. If it's my code, a stack trace won't help them figure out what they did wrong. Maybe that's where I picked up my validation habits: I like to do what I can to ensure deep-layer exceptions can't happen or can be described in terms of high-layer concepts.
    Yeah, I can agree with that. I typically don't view developers as "users," though -- even when giving them libraries to use. I would never let the end user see a stack trace, but I've been frustrated with external libraries before that give me "helpful" error messages that point you in the wrong direction, so in turn I typically just let a dev see exactly what happened.
