Challenge 2 Trial Event 2 Results.xlsx
URL: https://data.openei.org/files/6197/Challenge_2_Trial_Event_2_Results.xlsx
This spreadsheet contains 12 tabs. The Evaluation results are in the "data" tab, the leaderboard results are in the "Msgain tables" tab, the list of participating teams is in the "teams" tab, and information about the datasets is in the "dataset list 68" tab.
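For readers who want to work with the file programmatically, the tabs named above can be loaded with pandas. This is a minimal sketch, assuming the workbook has been downloaded from the URL above and that the sheet names match the tab names quoted in the text; adjust them if the actual names differ.

```python
import pandas as pd

# Path to the downloaded workbook (see the URL above).
XLSX_PATH = "Challenge_2_Trial_Event_2_Results.xlsx"

# Sheet names as described in the text above (assumed, not verified here).
sheets = pd.read_excel(
    XLSX_PATH,
    sheet_name=["data", "Msgain tables", "teams", "dataset list 68"],
)

data = sheets["data"]                  # per-scenario Evaluation results
leaderboard = sheets["Msgain tables"]  # leaderboard results
teams = sheets["teams"]                # participating teams
datasets = sheets["dataset list 68"]   # dataset information

print(data.shape, list(data.columns)[:6])
```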
The meaning of the first 4 columns of the data tab should be obvious: team (name), (network) model, scenario (number), and division (1-4).
The next 3 columns are the scoring elements. The scenario score is the maximum of zpp and the objective, which is the Market Surplus obtained from the two solution files. If that value is blank, Evaluation was unable to obtain it for one of several possible reasons: one or both solution files were missing, incomplete, or defective. The team score over a given set S of scenarios is the sum of the scenario scores.
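The two scoring rules above can be written out directly. This is a sketch only: the column names 'zpp' and 'objective' are assumed from the text, and the official handling of a blank objective is not specified here, so a missing value is simply left unscored.

```python
import pandas as pd

def scenario_score(zpp: float, objective: float) -> float:
    """Scenario score as described above: the maximum of zpp and the objective.

    A blank objective is treated as missing here; the official treatment is
    not stated in this sheet description.
    """
    if pd.isna(objective):
        return float("nan")
    return max(zpp, objective)

def team_score(scenarios: pd.DataFrame) -> float:
    """Team score over a set S of scenarios: the sum of the scenario scores.

    `scenarios` is assumed to be the rows of the data tab for one team, with
    hypothetical column names 'zpp' and 'objective'.
    """
    scores = scenarios.apply(
        lambda row: scenario_score(row["zpp"], row["objective"]), axis=1
    )
    return scores.sum(skipna=True)
```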
The last_scored_time column is the timestamp of when the Evaluation was completed.
The infeasible column will contain "1" if Evaluation found the solution to be infeasible, which also happens if any switching is done in divisions 1 or 2. If no solution1 file is found, the cell will be blank.
The sol1_created column will contain "1" if a BASECASE.txt file (also known as solution1 or sol1) is found; the sol2_created column will contain "1" if the appropriate number of "solution_label.txt" files (also known as solution2 or sol2) are found. Both columns need to be 1 and infeasible must be 0 for the objective value to be calculated. A column may be 1 even though the files are invalid; this is probably the case if sol1_created is 1 but srun2_state is blank. It is not the case when the number of 0 entries equals the number of blank srun2_state entries; those cells are highlighted when this occurs.
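The conditions in the preceding paragraphs can be combined into simple filters over the data tab. A sketch, assuming the column names sol1_created, sol2_created, infeasible, and srun2_state appear exactly as described:

```python
import pandas as pd

def objective_computable(data: pd.DataFrame) -> pd.Series:
    """Boolean mask of rows where the objective value could be calculated:
    both solution files were created and the solution was not infeasible."""
    return (
        (data["sol1_created"] == 1)
        & (data["sol2_created"] == 1)
        & (data["infeasible"] == 0)
    )

def suspect_solution1(data: pd.DataFrame) -> pd.Series:
    """Rows where sol1_created is 1 but srun2_state is blank, which the text
    flags as a likely sign of an invalid solution1 file."""
    return (data["sol1_created"] == 1) & (data["srun2_state"].isna())
```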
The code1_runtime, code2_runtime, and eval_runtime columns give the number of seconds used for each purpose. The code1 time limit is either 300 seconds (divisions 1 and 3) or 3600 seconds (divisions 2 and 4). The code2 time limit is given in the code2_timelimit column (AD). The Eval runtime is for platform diagnostics.
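The code1 limit described above can be expressed directly; the code2 limit is scenario-specific and should be read from the code2_timelimit column rather than computed. A small sketch of the stated rule:

```python
def code1_time_limit(division: int) -> int:
    """Code1 wall-clock limit in seconds, per the rule quoted above:
    300 s for divisions 1 and 3, 3600 s for divisions 2 and 4."""
    if division in (1, 3):
        return 300
    if division in (2, 4):
        return 3600
    raise ValueError(f"unknown division: {division}")
```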
The srun1_state and srun2_state columns give the completion status for code1 and code2. Exceeding the time limit may result in a CANCELLED status, which is not bad if the appropriate valid solution files were created; the same is true for a FAILED status. The corresponding code1_err or code2_err message may be helpful. CANCELLED+ indicates additional information was truncated; check the corresponding err column. The srun2_state will be blank if no solution1.txt was generated.
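Because CANCELLED, CANCELLED+, and FAILED are only a concern when the expected solution files are missing, a helper like the following can separate benign terminations from real problems. This is a sketch based on the interpretation above, not part of the official Evaluation code.

```python
def run_needs_attention(state: str, sol_created: int) -> bool:
    """Flag runs whose srun state suggests a real problem.

    CANCELLED (including CANCELLED+) or FAILED is not necessarily bad if the
    expected solution files were still produced; a blank state is handled by
    the sol1_created / srun2_state checks instead.
    """
    if not isinstance(state, str) or state == "":
        return False  # blank state: no solution1 was generated
    if state.startswith("CANCELLED") or state == "FAILED":
        return sol_created != 1  # only a concern if the files are missing
    return False
```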
The code1_err and code2_err columns provide additional error message information. One of the most common messages is that the run was cancelled due to time limit, which should also be indicated in the code1 or code2 timedout columns.
A 1 in the code1_done (column BB) or code2_done (column BC) column indicates the corresponding code was run. Code1 executes in every case, but code2 does not execute if there is a problem with the solution files.
The contingency_count column (BE) gives the number of contingencies for a scenario, the sec_per_contingency column (BF) tells how long it took to evaluate each contingency, and the code2_timelimit column (BG) uses the contingency_count to compute the time limit for code2 for a scenario.
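The exact formula behind code2_timelimit is not given here; purely to illustrate the relationship the text describes (the limit is derived from contingency_count), the sketch below uses a hypothetical per-contingency allowance. The parameters are assumptions, not competition values, and the spreadsheet's code2_timelimit column remains authoritative.

```python
def code2_time_limit(contingency_count: int,
                     seconds_per_contingency: float = 2.0,
                     base_seconds: float = 0.0) -> float:
    """Hypothetical reconstruction of the code2 time limit for a scenario.

    Both parameters are illustrative assumptions; use the code2_timelimit
    column (BG) for the actual values.
    """
    return base_seconds + seconds_per_contingency * contingency_count
```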
The code1_exitcode (column BJ) and code2_exitcode (BK) columns show the exit code returned by the given code. These can depend on the language being used. A code of 0 indicates a normal exit, 1 is an unspecified failure, 126 indicates the command was found but could not be executed (likely because the run had timed out), 127 indicates the run timed out, 134 indicates the run was aborted, 135 is a bus error, 137 is a forced termination, and 139 is a segmentation fault (address out of bounds, typically due to an invalid index value).
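The interpretations listed above (which, as noted, can depend on the language used) are easy to keep as a lookup table when scanning the exit-code columns:

```python
# Exit-code meanings as listed in the description above.
EXIT_CODE_MEANING = {
    0: "normal exit",
    1: "unspecified failure",
    126: "command found but could not be executed (likely timed out)",
    127: "run timed out",
    134: "run was aborted",
    135: "bus error",
    137: "forced termination",
    139: "segmentation fault (address out of bounds)",
}

def describe_exit_code(code: int) -> str:
    return EXIT_CODE_MEANING.get(code, f"unrecognized exit code: {code}")
```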
The url column (BR) holds the address from which the results tar file may be downloaded.
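Fetching one of those archives is straightforward. A minimal sketch, assuming the url column contains a direct link ending in the archive's file name:

```python
import os
import urllib.request

def download_results(url: str, dest_dir: str = "results") -> str:
    """Fetch the results tar file referenced in the url column (BR).

    Returns the local path of the downloaded archive.
    """
    os.makedirs(dest_dir, exist_ok=True)
    local_path = os.path.join(dest_dir, os.path.basename(url))
    urllib.request.urlretrieve(url, local_path)
    return local_path
```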
Data from the Evaluation program starts in column BU and may be understood from the comments in the code available at https://github.com/GOCompetition/C2DataUtilities. The Reruns column (LD) tells how many times the solution/Evaluation package was run for a given scenario. Trial Event 2 required a total of 5593 runs, 13.5% more than the minimum.
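The rerun overhead quoted above can be recomputed from the data tab. This sketch assumes the column is named "Reruns" as in the text and that the minimum is one run per scenario row:

```python
import pandas as pd

def rerun_overhead(data: pd.DataFrame, rerun_col: str = "Reruns") -> float:
    """Fraction of runs beyond the minimum of one run per scenario row.

    For Trial Event 2 the text reports 5593 total runs, 13.5% above the minimum.
    """
    total_runs = data[rerun_col].sum()
    minimum = len(data)
    return (total_runs - minimum) / minimum
```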
Source: ARPA-E Grid Optimization (GO) Competition Challenge 2
About this Resource
| Field | Value |
|---|---|
| Last updated | unknown |
| Created | unknown |
| Name | Challenge 2 Trial Event 2 Results.xlsx |
| Format | MS Excel File |
| License | Creative Commons Attribution |
| Created | 1 year ago |
| Media type | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet |
| has views | False |
| id | c8676230-8346-4b14-8cfb-17bcffbfd68c |
| metadata modified | 1 year ago |
| package id | 49255019-1ea1-4dc4-ae86-1a34f5d631cf |
| position | 20 |
| state | active |
| tracking summary | {'total': 0, 'recent': 0} |