Visual Testing with Java

Testing, or writing tests, is today as much a part of a developer's day as writing the actual source code. One reason is the popularity of testing frameworks like TestNG or JUnit in the Java landscape, which allow tests to be run automatically. Methods like TDD (Test-Driven Development) or BDD (Behavior-Driven Development) even go one step further and combine writing source code and tests into a comprehensive approach.

Nevertheless, the feeling remains that testing is more an annoying duty than equivalent work. Most developers would confirm that they like writing source code, but if asked about testing, they would probably shake their heads. And that raises an interesting question: why do we not like to write tests?

But first of all, we will briefly address the fundamental criticism that testing generally takes (too much) time which is then missing for developing the real software. Even if there is some agreement today that writing and running unit tests makes software development more efficient in the long run, because extending and refactoring become easier and fewer errors are introduced, the statement contains some truth: it is a fact that we will be slower in developing a software component if we have to write unit tests in addition to the productive code.

Starting from this consideration, we can conclude the main task of the perfect test framework: it must make writing and maintaining tests as efficient as possible, so we can spend more time developing the real software. And of course it would be nice if testing were more fun. It is not only the time spent that bothers us, but also the fact that some of the work in testing is just boring and annoying.

Testing Today

To understand where the problems with today's tests lie, we have a look at a simple example which tests a join method and consists of one positive and one negative test. If we implement this with a framework like TestNG, the resulting code looks like this:

public void testJoin() {
   assertEquals("Hello MagicTest", join("Hello", "MagicTest"));
   try {
      join("Hello", null);
      fail("Expected IllegalArgumentException");
   } catch (IllegalArgumentException e) {
      // expected
   }
}

public static String join(String s1, String s2) {
   if (s1 == null || s2 == null) {
      throw new IllegalArgumentException("Null strings are not allowed");
   }
   return s1 + " " + s2;
}

At first glance it is obvious that positive and negative tests must be formulated in different ways:

  • In a positive test, the check for correctness is done using assertions. This means that the expected result must be integrated into the test code.

  • In a negative test, where we expect an exception, we must catch this exception using try-catch to prevent the test from ending prematurely.

The reason why these tests have to be formulated so differently is tied to the definition of when a test is considered successful: a test is considered successful if it runs through without a failed assertion or an unexpected exception. So the test can - and this is the big advantage of the test frameworks - be run automatically. The appearance of the test code is obviously influenced by the features of the programming language, because if we have a look at a test plan, the two tests look pretty much the same:





   Parameters                Expected result
   ------------------------  ------------------------
   "Hello", "MagicTest"      "Hello MagicTest"
   "Hello", null             IllegalArgumentException


To make working with negative tests less tedious, various attempts have been made:

  • Both frameworks offer the possibility to move negative tests into separate test methods where the annotation @Test carries a specific attribute like expectedExceptions (TestNG) or expected (JUnit). In this case a separate test method must be used for each negative test.

  • JUnit introduces so-called rules with ErrorCollector as a standard implementation - a kind of soft assert.

  • A "soft assert" feature is also being discussed for implementation in TestNG.
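The idea behind such a soft assert can be sketched in plain Java. Note that this is only an illustration of the concept - it is not JUnit's actual ErrorCollector API, and the class and method names are made up for this sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the soft-assert idea: failures are collected instead of
// aborting the test run, and reported together at the end.
class SoftAssert {
    private final List<String> failures = new ArrayList<>();

    void assertEquals(String expected, String actual) {
        if (!expected.equals(actual)) {
            failures.add("expected <" + expected + "> but was <" + actual + ">");
        }
    }

    // Throws once at the end so all failures are visible in one run.
    void verify() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " failure(s): " + failures);
        }
    }
}

public class SoftAssertDemo {
    public static void main(String[] args) {
        SoftAssert sa = new SoftAssert();
        sa.assertEquals("a", "a");   // passes silently
        sa.assertEquals("b", "x");   // recorded, does not abort the run
        sa.assertEquals("c", "y");   // recorded as well
        try {
            sa.verify();
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The difference to a hard assert is visible immediately: both failing checks show up in a single run instead of only the first one.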

But whatever approach you choose, negative testing remains cumbersome.

Testing Yesterday

In the age before test frameworks, there were basically two ways to test a program at the unit level:

  • You checked correctness using the debugger by inspecting variables and states during the program run.

  • You added trace statements at the relevant places to document a program run. After running the test, you checked the created output for correctness. And when you had finished testing, you just had to remember to remove all the trace statements again...
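The second approach - trace statements - might have looked like this for our join method (a sketch; the exact trace format was of course up to the developer):

```java
// Sketch of the old trace-statement approach: the method under test is
// sprinkled with print statements that document each call, and the
// developer checks the output by eye after the run.
public class TraceDemo {
    public static String join(String s1, String s2) {
        System.out.println("TRACE join(" + s1 + ", " + s2 + ")");  // remove before release...
        if (s1 == null || s2 == null) {
            throw new IllegalArgumentException("Null strings are not allowed");
        }
        String result = s1 + " " + s2;
        System.out.println("TRACE -> " + result);
        return result;
    }

    public static void main(String[] args) {
        join("Hello", "MagicTest");
    }
}
```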

Both approaches rely on visual comparison done by the developer. This has the advantage that you did not have to write additional test code; the productive code was all you needed. The big disadvantage, however, is the fact that this kind of testing cannot be done automatically. So if you had to re-test software after some changes, the developer had to do the testing once more with the same accuracy - and this becomes less likely with every test iteration.


MagicTest automates this visual approach. The idea is that the developer only has to make this check manually the first time - afterwards the test framework does it automatically. To make this possible, the test program must dump the information needed for this check during the run. Which information we need can be seen in our test plan. So we end up with the following code for the first call to the method join in our example:

try {
   printParameters("Hello", "MagicTest");
   String result = join("Hello", "MagicTest");
   printResult(result);       // dump the result (helper analogous to printParameters)
} catch (Throwable t) {
   printException(t);         // dump the exception instead of failing
}

This obviously looks more tedious than our original test code. But we have nearly reached our target: with some instrumentation magic at the bytecode level we can do without all that boilerplate code in the test and just write the actual method calls:

@Trace
public static void testJoin() {
   join("Hello", "MagicTest");
   join("Hello", null);
}

The annotation @Trace instructs MagicTest to instrument the method under test. The method under test is by default determined using naming conventions: if the test method is named testJoin, MagicTest looks for a method under test named join and instruments its calls. This convention can be overruled using annotation parameters.

If we then run the test, the instrumented test code sends the needed information to the test framework, which collects and saves it as files. These reference files can then be stored under version control together with the source code. Running a test therefore involves the following activities:

  • The test program dumps information during the run, which is collected by the test framework.

  • After the test run, the test framework compares the actual output with the stored reference output.

  • If the outputs are equal, the test is successful. If they are not equal or the test method throws an unexpected exception (only those exceptions are caught automatically which are thrown during a call to the method under test), the test is considered as failed.
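The comparison step above can be sketched as follows. This is a simplification under the assumption that actual and reference output are plain text files; MagicTest's real implementation naturally differs:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Simplified sketch of the reference-file check: a test passes if its
// actual output matches the stored reference output; a missing reference
// (first run) counts as a failure that the developer must confirm.
public class ReferenceCheck {
    static boolean isSuccessful(Path actual, Path reference) throws IOException {
        if (!Files.exists(reference)) {
            return false;  // first run: no reference output yet
        }
        return Files.readAllLines(actual).equals(Files.readAllLines(reference));
    }

    public static void main(String[] args) throws IOException {
        Path actual = Files.createTempFile("actual", ".txt");
        Path reference = Files.createTempFile("reference", ".txt");
        List<String> output = List.of("join(\"Hello\", \"MagicTest\") -> Hello MagicTest");
        Files.write(actual, output);
        Files.write(reference, output);  // developer has already confirmed this output
        System.out.println("test successful: " + isSuccessful(actual, reference));
    }
}
```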

The visual approach also changes the definition of when a test is considered successful. If a test was not successful, the developer must check the actual output by comparing it to the reference output, which in most cases should be correct. The reference output can be wrong in the following cases:

  • The test method has been changed.

  • The specification of the method under test has changed, i.e. a different result is expected for the same method call.

  • There is no reference output yet because the test is being run for the first time.

Even if the reference output is wrong, the actual output must be checked carefully for correctness, as it could be wrong as well! If the actual output is correct, it can be saved as the new reference output. In all other cases the reference output is correct, so the test method must be adapted to produce the expected reference output.

Now we want to run our test program for the first time with MagicTest. The easiest way to do this is to use the MagicTest Eclipse plugin with the command Run As / MagicTest. After running the test, an HTML report is displayed:

This looks pretty similar to our original test plan! However, the red coloring indicates that the test has failed. This is of course correct, because the test has been run for the first time and the test framework cannot know the correct result. So it is now our task to check the report visually for correctness. As it is correct, we save the actual output as reference output. This can be done directly in the HTML report using the Save button. MagicTest now considers the test successful.

With this we have written and run our first test with MagicTest. After this simple example we want to analyze how the approach behaves in more complex examples and over the whole life cycle of software development.

Complex tests

As we have seen, the expected results must be included in the test code with the traditional assert-based approach. While it is obvious for simple static methods what has to be checked (namely the result), this becomes unclear for complex objects where instance methods are called.

Let's have a look at an example: I implement a method which should remove all namespaces from an XML element. What do I have to check to be sure that my method works correctly? Certainly I have to check that the passed element and all its child elements no longer contain any namespace. At the same time I should also check that all child elements and attributes present in the passed element still exist afterwards - after all, my method should only remove namespaces, not elements or attributes.

Using the traditional approach, the developer has to check all these facts using assertions. In our example the complete check can easily be more complex than the removal of the namespaces itself. So it can happen that the developer will only check the absence of the namespaces.

The visual approach of MagicTest simplifies the checking of the designated properties drastically. All information dumped during the program run is automatically integrated into the test report. In our example the simplest solution is to write out the XML element as text. By default MagicTest dumps the value returned by the toString method for parameters and results, but we can use a Formatter to change this behavior for certain classes. We end up with the following test code and test report:

static String formatElement(Element elem) {
   XMLOutputter serializer = new XMLOutputter();
   return serializer.outputString(elem);
}

@Trace(parameters=Trace.ALL_PARAMS, result=Trace.ALL_PARAMS)
public void testRemoveNamespaces() {
   Element elem = createElemWithNamespace();
   removeNamespaces(elem);
}

We can also see the strengths of the visual approach when a lot of data has to be checked. As an example, have a look at a method which queries data from a database. To guarantee the correctness of the method, we have to check the returned data. It quickly becomes clear that this cannot be handled reasonably by writing assertions, as you would have to include all the data in the test code. A solution would be to execute the queries manually or using a helper program and store the results in files. To use them for comparison, you could use a helper class like FileAssert which supports comparing file contents. But this is quite an overhead. When using MagicTest, such detours are not necessary: the only thing I need is a formatter to dump the result data as text, and it is already included in the test report.
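Such a formatter for query results could simply render each row as one line of text. The row type below (a list of maps) is a stand-in, as the actual result type depends on the data access layer used:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical formatter that renders query result rows as text so they
// can be compared visually; the names are illustrative, not MagicTest API.
public class RowFormatter {
    static String format(List<Map<String, Object>> rows) {
        StringBuilder sb = new StringBuilder();
        for (Map<String, Object> row : rows) {
            sb.append(row).append('\n');   // one line per row
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = new ArrayList<>();
        Map<String, Object> alice = new LinkedHashMap<>();
        alice.put("id", 1);
        alice.put("name", "Alice");
        rows.add(alice);
        Map<String, Object> bob = new LinkedHashMap<>();
        bob.put("id", 2);
        bob.put("name", "Bob");
        rows.add(bob);
        System.out.print(format(rows));
    }
}
```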

Until now our test method has only traced a single method, but it can make sense to trace multiple methods at once. In our next example we use a regular expression to select all methods of a class for tracing. It is thereby possible to capture whole use cases in a single test method. The created report will then show the details of each single test step.

@Trace(
   parameters = Trace.THIS|Trace.ALL_PARAMS,
   result = Trace.THIS|Trace.RESULT,
   title = "Trace all methods of Concatenator")
public static void testAll() {
   Concatenator c = new Concatenator();
   c.concat("b", "c");
   c.concat("d", "e");
}

public static String format(Concatenator concatenator) {
   return "[String='" + concatenator.toString() + "', Separator='" +
      concatenator.getSeparator() + "']";
}
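The article does not show the Concatenator class itself; a plausible sketch that matches the calls in the test (concat, getSeparator, toString) could look like this:

```java
// Hypothetical reconstruction of the Concatenator used in the example:
// it accumulates strings, joining them with a configurable separator.
class Concatenator {
    private final StringBuilder text = new StringBuilder();
    private String separator = ",";

    public String concat(String a, String b) {
        if (text.length() > 0) {
            text.append(separator);
        }
        text.append(a).append(separator).append(b);
        return text.toString();
    }

    public String getSeparator() {
        return separator;
    }

    @Override
    public String toString() {
        return text.toString();
    }
}

public class ConcatenatorDemo {
    public static void main(String[] args) {
        Concatenator c = new Concatenator();
        System.out.println(c.concat("b", "c"));
        System.out.println(c.concat("d", "e"));
        System.out.println("separator: " + c.getSeparator());
    }
}
```

Because the object's state changes with every call, tracing THIS before and after each call makes the accumulated string visible at each test step.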

Maintaining Tests

There is one aspect of testing which we have ignored until now: maintaining the tests. Like productive code, tests must not only be created but also maintained. Unfortunately the traditional assertion-based approach does not offer assistance in this area. Let's consider the example that the specification of a method changes in such a way that all test cases will return a different result. It is now obviously a problem that the expected results are integrated into the test code. It can easily be that the real change (because our code is well structured) can be done in one single place, but afterwards all the results in the test code must still be changed. To conclude: the real code change is done in a few seconds, but maintaining the tests takes a lot more time! Of course this does not motivate developers to write many unit tests.

There is a second problem: failed assertions abort the test run. If I forget to make a change or make it incorrectly, the test method stops with an exception and I do not know whether the rest of my changes were correct or not. The only way to find out is to correct the mistake and run the test again - and to hope that there is no further mistake stopping me...

MagicTest also solves this problem: after the real change has been made, the test is run again. The effects of the change are then all displayed in the HTML report. If the output is what we expected, all test results can be confirmed with a single mouse click on the Save button - without having to change anything in the test code.

To make the changes easily recognizable, they are visually highlighted as known from diff tools. The next illustration therefore shows the report we saw before once again, after we have changed the separator character of the concat method. This simple change influences the majority of the method calls - and this is shown intuitively by the test report. You can also see that the last call must have been newly added.


There is another objection to traditional tests: the real information is only accessible to the developers. The reason is that the reports created by the test frameworks contain too few details, as you can only see which tests have been successful but not exactly what has been tested. So it is often necessary to write a test plan in addition to the tests to document what has been tested for coworkers, managers, or customers. The detailed test report of MagicTest can solve this problem, as it lists the details for every single method call. This report can even be helpful for the developer himself if he later has problems figuring out what exactly he has tested due to the slack conventions used when writing tests.

Unit and Characterization Tests

The annotation @Trace used until now is targeted at unit tests. The approach of visual comparison, however, is also very well suited for characterization tests. These two kinds of tests have the following characteristics:

  • A unit test checks the functionality of a single software module.

  • A characterization test documents the real behavior of an existing software package. A package can be a single module or a whole application.

While unit tests are typically written together with the module itself, characterization tests are used as a safety net if software without enough unit tests must be extended or changed. They are then written to document the actual behavior so errors can be detected using regression tests.

MagicTest supports characterization tests with the annotation @Capture. With this annotation, text written by the program to an output channel like System.out is collected and stored as actual output. The following mini-example shows a program which just dumps some static texts. If you have a look at the created report, you might however guess the possibilities offered by this feature, as MagicTest now detects any change and makes it visible. To make use of this feature for existing software, it is often enough to add trace calls at the important locations to dump the relevant information.

@Capture(
      title="Capture with output type PRE")
public static void testCapture() {
   System.out.println("Line 1");
   System.out.println("Line 2");
   System.out.println("Line 3");
   System.out.println("Line 4");
   System.out.println("Line 5");
}
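Conceptually, such output capturing can be done in plain Java by redirecting System.out for the duration of the test and collecting everything that was printed. This sketch only illustrates the mechanism, not MagicTest's actual implementation:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Sketch of the capture idea behind @Capture: redirect System.out,
// run the code under test, and collect what it printed as actual output.
public class CaptureDemo {
    public static void main(String[] args) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buffer));
        try {
            // code under test writes to System.out as usual
            System.out.println("Line 1");
            System.out.println("Line 2");
        } finally {
            System.setOut(original);  // always restore the real stream
        }
        System.out.print("captured: " + buffer.toString());
    }
}
```

The captured text would then be compared against the stored reference output just like the trace output of a @Trace test.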


We have seen that testing with the traditional assertion-based approach offers the advantage of automation, but also has quite a few drawbacks. These are addressed by the visual approach featured by MagicTest. Testing becomes more efficient - and more fun.

MagicTest offers improvements in the following areas:

  • Positive tests: No assert statements are necessary to integrate the expected result into the test code. This not only reduces the amount of typing, but also makes maintaining the tests easier.

  • Negative tests: Negative tests are as simple as positive tests, i.e. there is no need for workarounds like try-catch or separate test methods.

  • Complex Tests: Complex objects and huge datasets can be checked by simply dumping the relevant information.

  • Maintaining tests: The effects of all changes are shown clearly arranged in the report and can be visually checked and confirmed.

  • Reporting: The HTML report contains the details for every method call, so it is clear to anybody what exactly has been tested.

MagicTest can be used in different ways. The most comfortable way for development is the Eclipse plugin. In addition, there are command line programs which can be used for integration into continuous integration systems. An integration with TestNG allows the continued use of existing test suites. Further information about MagicTest, including downloads, can be found on the website. For your first steps, you should use the provided example project to enjoy the new magic of testing within a few minutes.