ACATS 4.1 User's Guide

6.1.1 Workflow using the Grading Tool

The workflow using the Grading Tool is similar to using the ACATS manually. The steps needed are outlined below.
1. Install and configure the ACATS in the normal way, as outlined in clauses 5.1, 5.2, and 5.3. It is particularly important that the Macro Defs customization (5.2.2) is accomplished before generating any test summaries, as the summary program is unaware of the macro syntax. Also, do not use the grading tool on the support tests in the CZ directory, as some of these include intentional failure messages that the grading tool is not prepared to handle.
2. Compile the Grading and Test Summary Tools, as described in 6.1.3.
3. Determine how event traces are going to be constructed. If the implementation provides direct writing of an event trace (as described in 6.2.2), then go to step 4a. Otherwise, acquire or create a listing conversion tool as described in 6.2.3, and go to step 4b.
4a. Create command scripts (as described in 5.4) to process the ACATS. Include in those the appropriate option to create event traces. Also, modify Report.A so that the constant Generate_Event_Trace_File has the value True. When complete, go to step 5.
The command scripts could generate one giant event trace for the entire ACATS, but it probably is more manageable to create several smaller event traces for portions of the ACATS. One obvious way to do that is to create a single event trace for each subdirectory that contains ACATS tests in its default delivery structure. (Such event traces can be combined later, if desired.)
4b. Create command scripts (as described in 5.4) to process the ACATS. Include in those the use of the listing conversion tool to make event traces. Then go to step 5.
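As a purely illustrative starting point, the per-directory approach described in the note above can be sketched as a dry-run script. Here `compile_test` and the `.evt` extension are stand-ins (assumptions), not real ACATS names; substitute whatever command and naming convention your command scripts actually use:

```shell
#!/bin/sh
# Sketch only (assumptions): "compile_test" is a stand-in for whatever
# command your scripts use to process one test file and emit event-trace
# output; the ".evt" extension is likewise an assumption.
# Produces one event trace per test subdirectory, following the default
# ACATS delivery structure; traces can be concatenated later if desired.
make_traces() {
  for dir in "$@"; do
    trace="$dir.evt"
    : > "$trace"                # start a fresh trace for this directory
    for src in "$dir"/*; do
      # Replace this echo with the real command that appends the events
      # for $src to $trace (direct trace or converted listing).
      echo "compile_test --event-trace=$trace $src"
    done
  done
}

# Demonstration on two mock test directories:
mkdir -p c3 c4
touch c3/c32001a.ada c4/c41103b.ada
make_traces c3 c4
```

Replacing the echo with the real processing command turns the dry run into a working driver, without changing the per-directory structure.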
5. Create test summaries for each grading segment. If, for instance, you will grade each directory individually, then you will need a test summary file for each directory. These files can be generated by running the Test Summary tool on each source file in the directory using a single summary file as output. On most operating systems, this is easily accomplished with a script command. (Some possibilities are discussed in 6.1.5.)
The Test Summary Files will only need to be regenerated if the ACATS tests change in some way (typically, when an ACATS Modification List is issued). It's probably easiest to make a script to regenerate the entire set of summaries so that it can be used when the suite changes. Once the entire set of test summaries has been created, move to step 6.
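A dry-run sketch of such a regeneration script follows. The tool name `summary` and the `.sum` extension are assumptions; the real name and arguments depend on how you compiled the Test Summary tool (6.1.3):

```shell
#!/bin/sh
# Sketch only (assumptions): "summary" is a stand-in for the compiled
# Test Summary tool; its real name, arguments, and the ".sum" extension
# depend on your installation.
# Builds one summary file per directory, covering every test source in
# that directory; rerun the whole script whenever the suite changes.
make_summaries() {
  for dir in "$@"; do
    sum="$dir.sum"
    : > "$sum"                  # one combined summary file per directory
    for src in "$dir"/*; do
      # Replace this echo with the real tool run, directing all output
      # for this directory into the single file $sum.
      echo "summary $src >> $sum"
    done
  done
}

# Demonstration on a mock test directory:
mkdir -p b2_demo
touch b2_demo/b24001a.ada b2_demo/b24002b.ada
make_summaries b2_demo
```

Running one script like this over every test directory keeps the full set of summaries easy to regenerate after an ACATS Modification List is applied.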
6. Create an empty manual grading request file. This is just an empty text file. (See 6.4 for more information.)
7. Process the ACATS tests, creating event traces. The event traces should contain the same tests as the test summary files. This process is described in 5.5.
8. Run the Grading Tool (GRADE) on the pairs of event traces and summaries, using the current manual grading request file. Typically, the default options are sufficient, but some implementations or event traces may need options. The options are described in 6.1.4.
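The pairing step can be sketched as another dry-run loop. The command name `grade` and the convention that `foo.evt` pairs with `foo.sum` are assumptions for illustration; the real invocation and options are described in 6.1.4:

```shell
#!/bin/sh
# Sketch only (assumptions): "grade" is a stand-in for the compiled
# Grading Tool, and the pairing convention (foo.evt with foo.sum) is an
# assumption; real options are described in 6.1.4.
# Grades each event-trace/summary pair against the same manual grading
# request file.
grade_all() {
  req="$1"; shift
  for trace in "$@"; do
    sum="${trace%.evt}.sum"     # matching summary for this trace
    # Replace this echo with the real invocation; the default options
    # are usually sufficient.
    echo "grade $trace $sum $req"
  done
}

# Demonstration with mock files:
touch manual.req c3.evt c3.sum c4.evt c4.sum
grade_all manual.req c3.evt c4.evt
```

Keeping a single request file as the first argument ensures every segment is graded against the same manual grading list.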
9. If all of the grading reports display Passed, you're done. But most likely, some tests will be reported as failed. The grading tool will report only the first failure reason for each test; a test may have additional failure reasons.
A. If the failure reason is a process failure or a missing compilation, most likely there is a problem with the scripts that process the ACATS. Make sure that the test files are compiled in the appropriate order and no test files are missing. A missing compilation might also mean that the test needs to be split. See item B.
B. If the failure reason is extra or missing errors, grade the test manually (see 5.6) to see if the problem is with the implementation or the Grading Tool being too strict about error locations. If manual grading indicates the test passed, add the test to your Manual Grading Request file - again, see 6.4 (preferably with a comment explaining why it was added). Note that it is not necessary to remove tests from this list: if the grading tool determines that the test grades as Passed or Not Applicable, it will not request manual grading for the test even if it appears in this list.
If the manual grading indicates that the test needs to be split, do the following. First, add the test to your Manual Grading Request file - the ACATS requires processing the original test in this case. (Be sure to put in a comment that the test is split, since it won't be necessary to manually grade the original test in that case.) Then, split the test following the guidelines in 5.2.5, and add the split tests to a processing script and the test summary script. The Test Summary tool can create summaries for split tests, so the grading tool can be used to grade them.
C. For other failure reasons, most likely the implementation is at fault. Fixing the implementation is likely the only way to meaningfully change the result. In the unlikely event that there is a problem with a test, the procedure for test challenges is outlined in 5.7.2.
10. You'll also need to handle any special handling tests, including any tests that require manual grading. (This is one good reason to keep the manual grading list as short as possible.)
11. Then return to step 7 and repeat the test run.
Using this procedure, the vast majority of tests will not require hand grading. Future ACATS updates may improve tests that are particularly difficult to grade automatically. The ACAA is interested in which tests need manual grading for your implementation - see 6.4.
