Many developers assume they know how to write a good test, but in my experience few really do. They test units of code rather than units of behavior, so their tests become implementation-dependent and fail whenever the code changes, even when the behavior hasn't. Rather than providing a safety net that catches errors during refactoring, implementation-dependent tests end up just causing extra work.
A good test defines behavior without specifying implementation details. It states the result it wants, not how to get it. For example, if a document can be sorted, we may write a test for this by passing in an unsorted document and validating that the document we get back is sorted. The document doesn't care how it's sorted, just that it is sorted. If we later add a business rule that says, "For documents under 100 lines long use quick sort, otherwise use bubble sort," we'd add two tests: one to validate that quick sort is selected for a 99-line document, and another to validate that bubble sort is selected for a 100-line document.
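A behavior-level test for the first case might look like the following sketch. The `Document` class and its `sort` method are hypothetical stand-ins, not from the original text; the point is that the test observes only the outcome:

```python
class Document:
    """Hypothetical stand-in: a document that holds lines of text."""
    def __init__(self, lines):
        self.lines = list(lines)

    def sort(self):
        # The test below does not care which algorithm runs here;
        # it only observes the result.
        self.lines.sort()

def test_sort_orders_lines():
    doc = Document(["pear", "apple", "mango"])
    doc.sort()
    # Assert the behavior we want (the lines end up in order),
    # not how the sorting was done.
    assert doc.lines == ["apple", "mango", "pear"]
```

Because the assertion checks only the sorted result, we can swap the sorting algorithm inside `sort` later without touching this test.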
In this case, since we're testing a boundary, we need two tests: one below the boundary and one above. We generally pick values right at the border, in this case 99 for below and 100 for above. Notice that we don't have a test for 98, because it would behave like the test for 99. Likewise, we don't have a test for 101, as it would behave the same as the test for 100.
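The boundary tests above can be sketched as follows. Here `choose_sort` is a hypothetical selector standing in for whatever the real code does; the shape to notice is one assertion on each side of the boundary:

```python
def choose_sort(line_count):
    """Hypothetical selector: pick an algorithm by document length."""
    # Business rule: under 100 lines use quick sort, otherwise bubble sort.
    return "quick sort" if line_count < 100 else "bubble sort"

# Two tests, one on each side of the boundary, using the border values.
assert choose_sort(99) == "quick sort"    # just below the boundary
assert choose_sort(100) == "bubble sort"  # on the boundary
```

A test for 98 or 101 would exercise the same branch as 99 or 100 respectively, which is why the text stops at the two border values.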
If we built our sort algorithms test-first, then we may have tests that reach into the implementation of sort, but that's not Document's concern. From the document's perspective, it's only concerned with whether the document is sorted after sort is invoked, and from the document factory's perspective, it's only concerned that the right sort algorithm is invoked based on the number of lines in the document. These are separate concerns that end up as separate, implementation-independent tests.
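One way to keep those concerns in separate tests is to have the factory hand back a sort strategy. The names here (`sort_strategy_for`, the two strategy functions) are hypothetical illustrations, not the author's API; the stand-in bodies just delegate to Python's built-in sort so the sketch runs:

```python
def quick_sort(lines):
    # Stand-in body: a real quick sort implementation would go here.
    return sorted(lines)

def bubble_sort(lines):
    # Stand-in body: a real bubble sort implementation would go here.
    return sorted(lines)

def sort_strategy_for(line_count):
    """Hypothetical factory: select a strategy by line count."""
    return quick_sort if line_count < 100 else bubble_sort

# The factory's concern: the right algorithm is selected at the boundary.
assert sort_strategy_for(99) is quick_sort
assert sort_strategy_for(100) is bubble_sort

# The document's concern: after sorting, the lines are in order,
# regardless of which strategy did the work.
lines = ["b", "a", "c"]
assert sort_strategy_for(len(lines))(lines) == ["a", "b", "c"]
```

Each test can now change independently: swapping the sorting algorithm breaks neither the factory test nor the document test, and changing the selection rule touches only the factory test.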