External contest formats

There are two different sets of needs that external contest formats strive to satisfy.

  • The first is that of contest admins who, for several reasons (storage of old contests, backup, distribution of data), want to export the contest's original data (tasks, contestants, ...) together with all data generated during the contest, both by the contestants (submissions, user tests, ...) and by the system (evaluations, scores, ...). Once a contest has been exported in this format, CMS must be able to reimport it in such a way that the new instance is indistinguishable from the original.
  • The second is that of contest creators, who want an environment that helps them design tasks and testcases and insert the contest data (contestant names and so on). The format needs to be easy to write, understand and modify, and should provide tools that help with developing and testing the tasks (automatic generation of testcases, testing of solutions, ...). CMS must be able to import it as a new contest, but also to import it over an already created contest (for example after some data has been updated).

CMS provides an exporter cmsContestExporter and an importer cmsContestImporter working with a format suitable for the first set of needs. This format comprises a dump of all serializable data regarding the contest in a JSON file, together with the files needed by the contest (testcases, statements, submissions, user tests, ...). The exporter and importer also understand compressed versions of this format (i.e., in a zip or tar file). For more information run

cmsContestExporter -h
cmsContestImporter -h

As for the second set of needs, the philosophy is that CMS should not force upon contest creators a particular environment to write contests and tasks. Therefore, we encourage you to write importer and reimporter scripts, modeled upon those we wrote for the environment used in the Italian Olympiads, that can be run with the commands cmsYamlImporter and cmsYamlReimporter and inspected at cmscontrib/YamlImporter.py and cmscontrib/YamlReimporter.py. If you want to use the Italian environment there is a description in the next section, but please be aware that it has severe limitations: for example, many handles are in Italian and the support for complex task types is a bit cumbersome.

Italian import format

You can follow this description while looking at this example. A contest is represented by one directory, containing:

  • a YAML file named contest.yaml, that describes the general contest properties;
  • for each task task_name, a YAML file task_name.yaml that describes the task and a directory task_name that contains all the files needed to build the statement of the problem, the input and output cases, the reference solution and (when used) the solution checker.

The exact structure of these files and directories is detailed below. Note that feeding cmsYamlImporter confusing input (that is, parameters and/or files from which it can infer no task or score type, or more than one) may confuse it, producing inconsistent tasks and/or strange errors.
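
For reference, a contest directory for a contest with a single task might be laid out roughly as follows (all names are hypothetical; the exact contents of the task directory are described under "Task directory" below):

mycontest/
    contest.yaml
    easy1.yaml
    easy1/
        testo/
        input/
        output/
        gen/
        sol/
        cor/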

General contest description

The contest.yaml file is a plain YAML file, with at least the following keys.

  • nome_breve (“short name”, string): the contest’s short name, used for internal reference (and exposed in the URLs); it has to match the name of the directory that serves as contest root.
  • nome (“name”, string): the contest’s name (description), shown to contestants in the web interface.
  • problemi (“tasks”, list of strings): a list of the tasks belonging to this contest; for each of these strings, say task_name, there must be a file named task_name.yaml in the contest directory and a directory called task_name, used to extract information about that task; the order in this list will be the order of the tasks in the web interface.
  • utenti (“users”, list of associative arrays): each of the elements of the list describes one user of the contest; the exact structure of the record is described below.

The following are optional keys.

  • inizio (“start”, integer): the UNIX timestamp of the beginning of the contest (copied in the start field); defaults to zero, meaning that contest times haven’t yet been decided.
  • fine (“stop”, integer): the UNIX timestamp of the end of the contest (copied in the stop field); defaults to zero, meaning that contest times haven’t yet been decided.
  • token_*: token parameters for the contest, see Tokens (the names of the parameters are the same as the internal names described there); by default tokens are disabled.
  • max_*_number and min_*_interval (integers): limitations for the whole contest, see Limitations (the names of the parameters are the same as the internal names described there); by default they’re all unset.
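
As an illustration, a minimal contest.yaml might look like the following (all names and values are hypothetical):

nome_breve: mycontest          # must match the contest directory name
nome: My Example Contest
inizio: 1356998400             # UNIX timestamp of the contest start
fine: 1357016400               # UNIX timestamp of the contest end
problemi:
  - easy1
  - hard2
utenti:
  - username: alice
    password: secret1
  - username: bob
    password: secret2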

User description

Each contest user (contestant) is described by one element of the utenti key in the contest.yaml file. Each record has to contain the following keys.

  • username (string): obviously, the username.
  • password (string): obviously, the user’s password.

The following are optional keys.

  • nome (“name”, string): the user’s real first name; defaults to the empty string.
  • cognome (“surname”, string): the user’s real last name; defaults to the value of username.
  • ip (string): the IP address from which incoming connections for this user are accepted, see User login; defaults to 0.0.0.0.
  • fake (string): when set to True (a case-sensitive string) it sets the hidden flag in the user, see User login; defaults to False.
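
For example, an entry of the utenti list with all optional keys set might look like this (values are hypothetical):

- username: alice
  password: secret1
  nome: Alice
  cognome: Rossi
  ip: 10.0.0.15
  fake: "True"    # note: a string, not a boolean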

Task description

The task YAML file requires the following keys.

  • nome_breve (“short name”, string): the name used internally to reference this task; it is exposed in the URLs.
  • nome (“name”, string): the long name (title) used in the web interface.
  • n_input (integer): number of test cases to be evaluated for this task; the actual test cases are retrieved from the task directory.

The following are optional keys.

  • timeout (float): the time limit for this task in seconds; defaults to no limit.
  • memlimit (integer): the memory limit for this task in megabytes; defaults to no limit.
  • risultati (“results”, string): a comma-separated list of test cases (identified by their numbers, starting from 0) that are marked as public, hence their results are available to contestants even without using tokens.
  • token_*: token parameters for the task, see Tokens (the names of the parameters are the same as the internal names described there); by default tokens are disabled.
  • max_*_number and min_*_interval (integers): limitations for the task, see Limitations (the names of the parameters are the same as the internal names described there); by default they’re all unset.
  • outputonly (boolean): if set to True, the task is created with the OutputOnly type; defaults to False.

The following are optional keys that must be present for some task type or score type.

  • total_value (float): for tasks using the Sum score type, this is the maximum score for the task and defaults to 100.0; for other score types, the maximum score is computed from the task directory.
  • infile and outfile (strings): for Batch tasks, these are the file names for the input and output files; default to input.txt and output.txt.
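
Putting it together, a task_name.yaml for a Batch task might look like this (a hypothetical sketch, not taken from a real contest):

nome_breve: easy1        # should match the task directory and YAML file name
nome: An Easy Task
n_input: 20
timeout: 1.0             # seconds
memlimit: 256            # megabytes
risultati: 0,1           # testcases 0 and 1 are public
infile: input.txt
outfile: output.txt
total_value: 100.0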

Task directory

The content of the task directory is used both to retrieve the task data and to infer the type of the task.

These are the required files.

  • testo/testo.pdf (“statement”): the main statement of the problem. It is not yet possible to import several statements associated with different languages.
  • input/input%d.txt and output/output%d.txt for all integers %d between 0 (included) and n_input (excluded): these are of course the input and (one of) the correct output files.

The following are optional files, that must be present for certain task types or score types.

  • gen/GEN: in the Italian environment, this file describes the parameters for the input generator: each line that is not composed entirely of whitespace or comments (comments start with # and extend to the end of the line) represents an input file. Here, it is used, in case it contains specially formatted comments, to signal that the score type is GroupMin: a line containing only a comment of the form # ST: score marks the beginning of a new group worth at most score points, containing all subsequent testcases until the next special comment (an example is sketched after this list). If the file does not exist, or does not contain any special comments, the task is given the Sum score type.
  • sol/grader.%l (where %l here and after means a supported language extension): for tasks of type Batch, it is the piece of code that gets compiled together with the submitted solution, and usually takes care of reading the input and writing the output. If one grader is present, the graders for all supported languages must be provided.
  • sol/*.h and sol/*lib.pas: if a grader is present, all other files in the sol directory that end with .h or lib.pas are treated as auxiliary files needed by the compilation of the grader with the submitted solution.
  • cor/correttore (checker): for tasks of types Batch or OutputOnly, if this file is present, it must be the executable that examines the input and both the correct and the contestant’s output files and assigns the outcome. If the file is not present, a simple diff is used to compare the correct and the contestant’s output files.
  • cor/manager: for tasks of type Communication, this executable is the program that reads the input and communicates with the user solution.
  • sol/stub.%l: for tasks of type Communication, this is the piece of code that is compiled together with the user submitted code, and is usually used to manage the communication with manager. Again, stubs for all supported languages must be provided.
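
As a sketch, a gen/GEN file splitting six testcases into two GroupMin groups worth 40 and 60 points might look like this (the generator parameters on each line are hypothetical):

# ST: 40
10 1
100 1
1000 1
# ST: 60
10000 2
100000 2
1000000 2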
