AFC - Abacus Formula Compiler for Java

Automated Formula Tests

The spreadsheet files in


contain formula tests. AFC’s build system runs them automatically for all the supported numeric types and with and without caching enabled. It also makes them citable by writing out documentation fragments describing them.


Here are the tests for ABS():

  Row   Expected (A)   Actual (B)   Input (C)   # of Inputs (J)   Name (K)   Highlight (L)
    2   1              1            -1          1                 ABS        ABS
    3   0              0            0           1
    4   1              1            1           1
    5   2              2            -2          1
    6   2              2            2           1
    7   0              0            false       1
    8   1              1            true        1
    9   3              3            -3          1
   10   4              4            -4          1                 ...

Row 1 is the header row; its further cells carry the column titles Excel says (M), Skip for (N), and Custom check (O), and cell Q1 holds the global check value true. Row 10 is an alternate-inputs row (see the meaning of ... below).

And here is the documentation produced from it. Note also how the list of supported functions in the reference index automatically includes ABS.

The omitted columns O-Q are macro-generated. They check that the expected and actual values really do match, which is then globally ascertained in the header. Here is row 2 of them, with the saved check result in column Q:

  2   true


This spreadsheet has one row per test case. Blank rows are ignored. The columns have the following meanings. As examples, just look at what I inserted for the ABS() tests in the sheet above.

Expected (A)
Holds the expected result of the computation. You should always fill this cell by copying the value from column B, and then pasting it to column A using Paste Special → Values Only.
Actual (B)
Holds the formula to be computed by AFC. It may reference constant values and any other cell. Normally, it references adjacent cells in columns C through I.
Inputs (C-I)
Typically holds input values to the formula.
# of Inputs (J)
The number of cells in C-I which AFC should bind to input methods. This is indicated by a conditionally formatted green background on those cells in Excel. When a cell is bound, it still returns the value that is given for it in the sheet, so the expected result remains valid.
Name (K)
The name of the feature being tested and documented. Used as a subtitle in the generated documentation. Subsequent rows with no name in this column are considered part of the last name's tests. (See below for the meaning of ... here.)
Highlight (L)
A substring that should be highlighted in the formula as rendered in the generated documentation. Subsequent rows with nothing in this column reuse the last value. Putting xx into the column clears the highlighting. You can add more than one substring, separated by spaces. The first such substring is automatically added to the list of supported functions in the reference index.
Excel says (M)
When AFC returns something different from Excel, you can document it here. This will be cited in the user documentation. Typical usage is when AFC returns 0, but Excel returns an error like #NUM!. You should then set this column to '#NUM! (note the tick at the start).
Skip for (N)
If this column contains any of double, big, long, then the test is not run for the corresponding numeric type. You can list multiple types here. Use this when, for example, the precision of long is not sufficient for a particular test case.
Custom Check (O)
Can contain an expression to replace the default comparison of actual vs. expected values in column P. You should not normally have to use this.
(P)
Default comparison of actual vs. expected value. The formula in this column is macro-generated in Excel (see below) and checked by the reference test runner.
(Q)
Result of the comparison of actual vs. expected value. The formula in this column is macro-generated in Excel (see below) and checked by the reference test runner. The reference test runner also checks that it does, in fact, have the saved value true. So are the formula and saved value of cell Q1.
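Taken together, the columns describe a test case that could be modeled roughly like this. This is a hypothetical sketch for illustration only; the class and field names are made up and this is not AFC's internal representation:

```java
// Illustrative model of one test row, following the column descriptions above.
// NOT AFC's actual internal representation; all names here are made up.
public class TestRow {
    final Object expected;      // column A: expected result of the computation
    final String formula;       // column B: formula to be computed by AFC
    final Object[] inputs;      // columns C-I: input values
    final int numberOfInputs;   // column J: how many of C-I to bind to input methods
    final String name;          // column K: feature name (may be empty, or "...")
    final String skipFor;       // column N: numeric types to skip, e.g. "big long"

    public TestRow( Object expected, String formula, Object[] inputs,
            int numberOfInputs, String name, String skipFor ) {
        this.expected = expected;
        this.formula = formula;
        this.inputs = inputs;
        this.numberOfInputs = numberOfInputs;
        this.name = name;
        this.skipFor = skipFor;
    }

    // Column N lists space-separated type names; the test is skipped for each of them.
    public boolean isSkippedFor( String numericType ) {
        for (String t : skipFor.trim().split( "\\s+" )) {
            if (t.equals( numericType )) return true;
        }
        return false;
    }
}
```

The remaining columns (Highlight, Excel says, Custom check) are omitted here for brevity.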

Alternate Inputs

When you specify bound input cells using column J, the automated test runner automatically runs the formula with all possible combinations of bound and unbound input values. This ensures that the runtime and compile time (constant folder) implementations work for all combinations.
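For three bindable input cells, for instance, the runner has to cover all 2^3 = 8 bound/unbound combinations. A conceptual sketch of that enumeration (not AFC's actual runner code):

```java
import java.util.ArrayList;
import java.util.List;

public class BoundCombinations {
    // For n bindable input cells there are 2^n ways of marking each cell
    // bound (true) or unbound (false). Bit i of the mask selects cell i.
    public static List<boolean[]> combinations( int n ) {
        List<boolean[]> result = new ArrayList<>();
        for (int mask = 0; mask < (1 << n); mask++) {
            boolean[] bound = new boolean[ n ];
            for (int i = 0; i < n; i++) {
                bound[ i ] = ((mask >> i) & 1) != 0;
            }
            result.add( bound );
        }
        return result;
    }
}
```

Each combination yields one compiled engine, so both the runtime path (bound cells) and the constant folder (unbound cells) get exercised.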

These tests do not, however, check whether a bound input really reacts dynamically to its runtime input value. The dynamic input value is, after all, still the same as the constant values in the spreadsheet. This is so because the result has to remain the same for checking.

To check alternate input sets, you use the value ... in the Name column K. This runs the test in the above row again, with the full set of inputs bound. The dynamic values are now taken from the alternate row instead of the original test row. So is the expected result. The formula in the alternate row, however, is ignored (because the original test row is run). While the formula is ignored by the test, you should still copy it down from the original test row to conveniently determine the expected result of the alternate row. The last test row for ABS() in the sheet above shows this. Finally, you can have more than one alternate row. Just add more rows with ... in the Name column.
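How ... rows attach to the test row above them can be sketched as follows (illustrative only; the row representation is made up and rows are reduced to their Name column values):

```java
import java.util.List;

public class AlternateGrouping {
    // Given the Name column values of consecutive test rows, returns for each
    // row the index of the test row it provides alternate inputs for, or -1
    // if it is a regular test row itself. Simplified sketch, not AFC code.
    public static int[] alternateTargets( List<String> nameColumn ) {
        int[] targets = new int[ nameColumn.size() ];
        int lastTestRow = -1;
        for (int i = 0; i < nameColumn.size(); i++) {
            if ("...".equals( nameColumn.get( i ) )) {
                targets[ i ] = lastTestRow; // alternate inputs for the test row above
            } else {
                targets[ i ] = -1;
                lastTestRow = i;
            }
        }
        return targets;
    }
}
```

Note how several consecutive ... rows all resolve to the same original test row.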

Macro-generated Columns

There is an Excel macro


that can generate and refresh

  • the actual vs. expected columns P and Q,
  • the header row,
  • the conditional formatting that highlights actual vs. expected problems, and
  • the conditional formatting that highlights AFC vs. Excel differences (Excel says).

To install a macro for Microsoft Excel:

  • Create a new file Macros.xls (or any other name) in %APPDATA%\Microsoft\Excel\XLSTART. (On Windows XP, %APPDATA% is usually C:\Documents and Settings\%USERNAME%\Application Data.)
  • With the new file open, start the Visual Basic Editor (Tools → Macro → Visual Basic Editor).
  • Import the macro source file (File → Import File), for example refTestSheets.bas.
  • Save the new .xls file you created.

This file will now be opened automatically whenever you run Excel, making the macros available. To run such a macro:

  • Open a workbook with reference test data.
  • Run the ForceAllFormatsAndColumns macro (Tools → Macro → Macros).

Running Tests

Every formula test sheet corresponds to an entry in a test class in


For the sheet NumericFunctions.xls, for example, we need to run the test class Basics, which looks like this:

public static Test suite() throws Exception
{
  return sheetSuite( "CellNames", "EmptyCells", "ErrorCells", "NumericOperators", "NumericComparisons",
      "NumericFunctions" );
}

Debug Test

Running these tests can take quite a while. To speed up debugging, you can use special test cases derived from org.formulacompiler.tests.reference.base.AbstractDebugSuiteSetup in your IDE. Make sure you have added the following source path for this (not included in release builds):


and also create your derived debug test cases there, in the org.formulacompiler.tests.reference package. Then they will be automatically ignored by Mercurial.

Here is an excerpt from this base test class. It should give you an idea of how to use it. It is also documented.

/**
 * Returns a suite that runs a single sheet for the number type double and with no caching.
 * <p>
 * See here how to build such a test run: {@.jcite -- sheetImpl}.
 *
 * @param _fileName is the base name of the file without path or extension.
 */
public static Test dbgSheetSuite( String _fileName ) throws Exception
{
  // -- sheetImpl
  return dbgSuiteBuilder( _fileName )
      // ... You could configure aspects here.
  // -- sheetImpl
}

/**
 * Returns a suite that runs a single row's engine for the given number type and no caching.
 * <p>
 * See here how to build such a test run: {@.jcite -- rowImpl}.
 *
 * @param _fileName is the base name of the file without path or extension.
 * @param _rowNumber is the 1-based row number for which to compile and run an engine.
 * @param _numberType is the numeric type to use.
 */
public static Test dbgRowSuite( String _fileName, int _rowNumber, BindingType _numberType ) throws Exception
{
  // -- rowImpl
  return dbgSuiteBuilder( _fileName )
      .row( _rowNumber )
      .numberType( _numberType )
      // ... You could go on configuring more aspects here.
  // -- rowImpl
}

/**
 * Returns a suite that runs a single row's engine for all number types and caching variants.
 * <p>
 * See here how to customize such complex test runs: {@.jcite -- fullRowImpl}.
 *
 * @param _fileName is the base name of the file without path or extension.
 * @param _rowNumber is the 1-based row number for which to compile and run an engine.
 * @see #dbgSuite(AbstractSetup)
 * @see BuilderSetup
 */
public static Test dbgFullRowSuite( String _fileName, final int _rowNumber ) throws Exception
{
  // -- fullRowImpl
  return dbgSuite( new AllNumberTypesSetup( new AllCachingVariantsSetup( new BuilderSetup( _fileName )
  {
    protected void configure( SheetSuiteBuilder _builder )
    {
      _builder.row( _rowNumber );
      // ... You could go on configuring more aspects here.
    }
  } ) ) );
  // -- fullRowImpl
}

Checking Compatibility with OpenOffice.org

AFC’s reference tests check compatibility with OpenOffice.org as follows:

  • For every test row in an Excel reference test sheet (MySheet.xls),
  • read the corresponding row from the parallel test sheet (MySheet.ods), and
  • ensure that the tests described by the two rows are equivalent.
  • Then compile and run an actual engine only from the row loaded from the .xls.
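Conceptually, the per-row check compares the two parsed descriptions rather than compiling both. A minimal sketch, with a made-up choice of compared fields (AFC's real check covers more attributes of the internal model):

```java
import java.util.Objects;

public class RowEquivalence {
    // Illustrative only: a row is reduced to the attributes that must agree
    // between the .xls and the .ods version of the same test sheet.
    public static boolean equivalent( String xlsFormula, Object xlsExpected,
            String odsFormula, Object odsExpected ) {
        return Objects.equals( xlsFormula, odsFormula )
                && Objects.equals( xlsExpected, odsExpected );
    }
}
```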

This means that when you add or modify reference tests in a .xls file, you have to similarly update the corresponding .ods file.

These tests ensure that the .ods parser is at least as capable as the .xls parser and returns the same internal model (and, by consequence, the same final engine). But since we do not yet compile engines loaded from .ods differently than those loaded from .xls, we don’t have to run the actual compilations twice. This will change when AFC properly handles differences between OpenOffice.org’s and Excel’s function implementations.

Change Tracking Using .yaml Files

The reference test sheets are a key component in AFC’s system tests, so it is important to know what is being changed in them. Since binary .xls and .ods files are hard to diff, the reference tests write the most relevant data from them out to text files during test runs. These text files are then versioned along with the .xls and .ods files.

The text files end in .yaml because they are in YAML format. But that is not the key point. More important is that they are tuned for easy diffing. In particular, they do not contain row or cell numbers, so that inserting a row high up does not result in a huge number of diffs due to changed row numbers further below.

When possible, we try to use just one .yaml file for both of the .xls and .ods files. This has the added bonus of telling us that the two really contain the same relevant data and formulas. When this is possible, there is just a single MySheet.yaml for both MySheet.xls and MySheet.ods. When it isn’t, then there is an additional MySheet.ods.yaml. (That there is no MySheet.xls.yaml is a convention, meaning the .xls is the master.)

During test runs, the YAML text is regenerated from the actual .xls and .ods files. If the text differs from the actual content of the corresponding .yaml file, a file called MySheet.xls-actual.yaml is written out using the new text, and the test fails. To make a test pass again after accepting the change, just delete the old .yaml and move the -actual.yaml in its place.
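This regenerate-compare-fail cycle could be sketched like this. The helper below is hypothetical; AFC's actual implementation and file naming (MySheet.xls-actual.yaml) differ:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class YamlCheck {
    // Compares freshly generated YAML text against the saved .yaml file.
    // On a mismatch, writes the new text next to it as "<name>-actual.yaml"
    // and reports failure. To accept the change, delete the old .yaml and
    // move the -actual.yaml into its place.
    public static boolean checkAgainstSaved( Path savedYaml, String generatedText )
            throws IOException {
        String saved = Files.exists( savedYaml )
                ? new String( Files.readAllBytes( savedYaml ), StandardCharsets.UTF_8 )
                : null;
        if (generatedText.equals( saved )) return true;
        String actualName = savedYaml.getFileName().toString()
                .replace( ".yaml", "-actual.yaml" );
        Files.write( savedYaml.resolveSibling( actualName ),
                generatedText.getBytes( StandardCharsets.UTF_8 ) );
        return false; // test fails; the new text is in the -actual.yaml file
    }
}
```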

We chose not to simply overwrite the .yaml files with new content so people are forced to notice when sheets change, even if not running under version control (which would detect the change on the next hg stat). However, if you set test-ref-update-yaml: true in, then the .yaml files are updated in place and the tests do not fail because of this. The rule is that .xls files update .yaml, unless .xls.yaml exists; .ods files always update .ods.yaml.