GOOD.utils.args

An important module that defines all arguments for both the argument container and the configuration container.

Functions

args_parser([argv])

Arguments parser.

GOOD.utils.args.args_parser(argv: Optional[list] = None)[source]

Arguments parser.

Parameters

argv – Input arguments, e.g., ['--config_path', config_path, '--ckpt_root', os.path.join(STORAGE_DIR, 'reproduce_ckpts'), '--exp_round', '1']

Returns

General arguments
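As a sketch of the calling pattern, here is a minimal argparse-based stand-in for the real Tap-based parser (only three representative flags are wired up; this is an illustration, not the library's implementation):

```python
import argparse
from typing import List, Optional

def args_parser(argv: Optional[List[str]] = None) -> argparse.Namespace:
    # Minimal stand-in for the Tap-based parser documented here;
    # only three representative CommonArgs flags are wired up.
    parser = argparse.ArgumentParser(description='General arguments')
    parser.add_argument('--config_path', type=str, required=True,
                        help='(Required) The path for the config file.')
    parser.add_argument('--ckpt_root', type=str, default=None,
                        help='Checkpoint root for saving checkpoint files.')
    parser.add_argument('--exp_round', type=int, default=None,
                        help='Current experiment round.')
    # argv=None falls back to sys.argv[1:], matching the documented default.
    return parser.parse_args(argv)

args = args_parser(['--config_path', 'cfg.yml', '--exp_round', '1'])
```

Note that the real parser returns a CommonArgs instance with nested DatasetArgs, ModelArgs, OODArgs, and TrainArgs containers rather than a flat Namespace.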

Classes

AutoArgs(*args[, underscores_to_dashes, ...])

CommonArgs(argv)

Correspond to general configs in config files.

DatasetArgs(*args[, underscores_to_dashes, ...])

Correspond to dataset configs in config files.

ModelArgs(*args[, underscores_to_dashes, ...])

Correspond to model configs in config files.

OODArgs(*args[, underscores_to_dashes, ...])

Correspond to ood configs in config files.

TrainArgs(*args[, underscores_to_dashes, ...])

Correspond to train configs in config files.

class GOOD.utils.args.AutoArgs(*args, underscores_to_dashes: bool = False, explicit_bool: bool = False, config_files: Optional[List[str]] = None, **kwargs)[source]

Bases: Tap

allow_algs: List[str] = None

Allowed OOD algorithms.

allow_datasets: List[str] = None

Datasets allowed to run.

allow_devices: List[int] = None

Devices allowed to run.

allow_domains: List[str] = None

Domains allowed to run.

allow_shifts: List[str] = None

Shift types allowed to run.

config_root: str = None

The root of input configuration files.

final_root: str = None

The root of output final configuration files.

launcher: str = None

The launcher name.

sweep_root: str = None

The root of hyperparameter searching configurations.

class GOOD.utils.args.CommonArgs(argv)[source]

Bases: Tap

Correspond to general configs in config files.

ckpt_dir: str = None

The direct directory for saving checkpoint files.

ckpt_root: str = None

Checkpoint root for saving checkpoint files; the inner directory structure is generated automatically.

clean_save: bool = None

Only save necessary checkpoints.

config_path: str = None

(Required) The path for the config file.

dataset: DatasetArgs = None

For code auto-complete

device = None

Automatically generated by choosing gpu_idx.

exp_round: int = None

Current experiment round.

gpu_idx: int = None

GPU index.

id_test_ckpt: str = None

Path of the model's in-domain checkpoint.

log_file: str = None

Log file name.

log_path: str = None

Log file path.

model: ModelArgs = None

For code auto-complete

num_workers: int = None

Number of workers used by data loaders.

ood: OODArgs = None

For code auto-complete

pipeline: str = None

Training/test controller.

process_args() None[source]

Perform additional argument processing and/or validation.
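For intuition, derived fields such as device can be filled in during this post-processing step. A hypothetical sketch (pick_device and its logic are assumptions for illustration; the real method may differ):

```python
def pick_device(gpu_idx: int, cuda_available: bool) -> str:
    # Derive the `device` field from `gpu_idx`, as process_args()
    # might do after parsing (illustrative logic only).
    return f'cuda:{gpu_idx}' if cuda_available else 'cpu'
```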

random_seed: int = None

Fixed random seed for reproducibility.

save_tag: str = None

Special save tag for distinguishing special training checkpoints.

task: typing_extensions.Literal[train, test] = None

Running mode. Allowed: 'train' and 'test'.

tensorboard_logdir: str = None

Tensorboard logging place.

test_ckpt: str = None

Path of the model's general test or out-of-domain test checkpoint.

train: TrainArgs = None

For code auto-complete
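The nested dataset, model, ood, and train containers above mirror the section layout of a YAML config file. A hypothetical fragment (the keys follow the attributes documented here, but every value is an illustrative assumption, not taken from a shipped config):

```yaml
# Hypothetical config fragment; top-level keys map to CommonArgs fields,
# each section to the corresponding *Args container.
exp_round: 1
random_seed: 123
task: train
dataset:
  dataset_name: ExampleDataset   # illustrative name
  domain: color
  shift_type: covariate
model:
  model_name: ExampleGNN         # illustrative name
  model_layer: 3
  model_level: graph
ood:
  ood_alg: ERM
  ood_param: 1.0
train:
  lr: 0.001
  max_epoch: 100
  train_bs: 64
```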

class GOOD.utils.args.DatasetArgs(*args, underscores_to_dashes: bool = False, explicit_bool: bool = False, config_files: Optional[List[str]] = None, **kwargs)[source]

Bases: Tap

Correspond to dataset configs in config files.

dataloader_name: str = None

Name of the chosen dataloader.

dataset_name: str = None

Name of the chosen dataset.

dataset_root: str = None

Dataset storage root. Default: STORAGE_ROOT/datasets.

dataset_type: str = None

Dataset type: molecule, real-world, synthetic, etc. For special usages.

dim_edge: int = None

Dimension of edge features.

dim_node: int = None

Dimension of node features.

domain: str = None

Domain selection.

generate: bool = None

The flag for generating GOOD datasets from scratch instead of downloading them.

num_classes: int = None

Number of labels for multi-label classifications.

num_envs: int = None

Number of environments in training set.

shift_type: typing_extensions.Literal[no_shift, covariate, concept] = None

The shift type of the chosen dataset.

class GOOD.utils.args.ModelArgs(*args, underscores_to_dashes: bool = False, explicit_bool: bool = False, config_files: Optional[List[str]] = None, **kwargs)[source]

Bases: Tap

Correspond to model configs in config files.

dim_ffn: int = None

Final linear layer dimension.

dim_hidden: int = None

Node hidden feature’s dimension.

dropout_rate: float = None

Dropout rate.

global_pool: str = None

Readout pooling layer type. Currently allowed: 'max', 'mean'.

model_layer: int = None

Number of GNN layers.

model_level: typing_extensions.Literal[node, link, graph] = 'graph'

What the model is used for: node, link, or graph predictions.

model_name: str = None

Name of the chosen GNN.

class GOOD.utils.args.OODArgs(*args, underscores_to_dashes: bool = False, explicit_bool: bool = False, config_files: Optional[List[str]] = None, **kwargs)[source]

Bases: Tap

Correspond to ood configs in config files.

extra_param: List = None

OOD algorithms’ extra hyperparameter(s).

ood_alg: str = None

Name of the chosen OOD algorithm.

ood_param: float = None

OOD algorithms' hyperparameter(s). Currently, most algorithms use it as a float value.

process_args() None[source]

Perform additional argument processing and/or validation.

class GOOD.utils.args.TrainArgs(*args, underscores_to_dashes: bool = False, explicit_bool: bool = False, config_files: Optional[List[str]] = None, **kwargs)[source]

Bases: Tap

Correspond to train configs in config files.

alpha = None

A parameter for DANN.

ctn_epoch: int = None

Start epoch when continuing training.

epoch: int = None

Current training epoch. This value should not be set manually.

lr: float = None

Learning rate.

max_epoch: int = None

Maximum number of epochs before training stops.

mile_stones: List[int] = None

Milestones for a scheduler to decrease the learning rate (decay factor 0.1).
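To illustrate how mile_stones is typically consumed, here is a MultiStepLR-style decay sketch (the helper lr_at_epoch and the default factor 0.1 are assumptions for illustration, not part of this module):

```python
from typing import List

def lr_at_epoch(base_lr: float, mile_stones: List[int], epoch: int,
                gamma: float = 0.1) -> float:
    # Multiply base_lr by gamma once for each milestone already passed,
    # mimicking a MultiStepLR-style schedule.
    return base_lr * gamma ** sum(epoch >= m for m in mile_stones)
```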

num_steps: int = None

Number of steps in each epoch for node classifications.

save_gap: int = None

Hard checkpoint saving gap.

stage_stones: List[int] = None

The epochs at which each subsequent training stage starts.

test_bs: int = None

Batch size for test.

tr_ctn: bool = None

Flag for continuing training.

train_bs: int = None

Batch size for training.

val_bs: int = None

Batch size for validation.

weight_decay: float = None

Weight decay.