fed-tune#

fedsim-cli fed-tune#

Tunes a Federated Learning system.

fedsim-cli fed-tune [OPTIONS]
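
A typical invocation sets the tuning budget, the algorithm, and the model on the command line and leaves everything else at the documented defaults; the values below are purely illustrative:

fedsim-cli fed-tune --n-iters 20 -a FedAvg -m SimpleMLP -r 100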

Options

--n-iters <n_iters>#

number of ask-and-tell iterations performed on the skopt optimizer

Default

10

--skopt-n-initial-points <skopt_n_initial_points>#

number of initial points for the skopt optimizer

Default

10

--skopt-random-state <skopt_random_state>#

random state for the skopt optimizer

Default

10

--skopt-base-estimator <skopt_base_estimator>#

base estimator used by the skopt optimizer

Default

GP

Options

GP | RF | ET | GBRT
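
For example, a longer search with a Random Forest surrogate, more initial points, and a fixed random state could look like this (values are illustrative):

fedsim-cli fed-tune --n-iters 50 --skopt-n-initial-points 15 --skopt-base-estimator RF --skopt-random-state 42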

--eval-metric <eval_metric>#

full name of the metric (as returned by the train method of the algorithm) to minimize, or to maximize if --maximize is passed

Default

server.avg.test.cross_entropy_score

--maximize, --minimize#

whether to maximize or minimize the metric specified by --eval-metric
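
For example, to maximize an accuracy metric instead of minimizing the default cross entropy (the exact metric name is an assumption; available names depend on the scores hooked with --local-score/--global-score below):

fedsim-cli fed-tune --eval-metric server.avg.test.accuracy_score --maximize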

-r, --rounds <rounds>#

number of communication rounds.

Default

100

-d, --data-manager <data_manager>#

name of data manager.

Default

BasicDataManager

--train-split-name <train_split_name>#

name of the local split to train on

Default

train

-n, --n-clients <n_clients>#

number of clients.

Default

500

--client-sample-scheme <client_sample_scheme>#

client sampling scheme (uniform or sequential for now).

Default

uniform

-c, --client-sample-rate <client_sample_rate>#

mean fraction of the number of clients to sample.

Default

0.01

-a, --algorithm <algorithm>#

federated learning algorithm.

Default

FedAvg

-m, --model <model>#

model architecture.

Default

SimpleMLP

-e, --epochs <epochs>#

number of local epochs.

Default

5
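
For example, the following simulates 1000 clients, sampling 2% of them per round for 200 rounds of FedAvg on SimpleMLP with a single local epoch (values are illustrative):

fedsim-cli fed-tune -d BasicDataManager -n 1000 -c 0.02 -r 200 -e 1 -a FedAvg -m SimpleMLP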

--criterion <criterion>#

loss function to use (defined under fedsim.losses).

Default

CrossEntropyScore, log_freq:50

--batch-size <batch_size>#

local batch size.

Default

32

--test-batch-size <test_batch_size>#

inference batch size.

Default

64

--optimizer <optimizer>#

server optimizer

Default

SGD, lr:1.0

--local-optimizer <local_optimizer>#

local optimizer

Default

SGD, lr:0.1, weight_decay:0.001

--lr-scheduler <lr_scheduler>#

lr scheduler for server optimizer

Default

StepLR, step_size:1, gamma:1.0

--local-lr-scheduler <local_lr_scheduler>#

lr scheduler for the local optimizer

Default

StepLR, step_size:1, gamma:1.0

--r2r-local-lr-scheduler <r2r_local_lr_scheduler>#

lr scheduler applied round to round to the local optimizer

Default

StepLR, step_size:1, gamma:0.999
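
For example, to lower the local learning rate and decay it slightly faster from round to round (the key:value syntax for passing arguments to a component mirrors the defaults listed above and is an assumption here; consult the fedsim-cli documentation for the exact form):

fedsim-cli fed-tune --local-optimizer SGD lr:0.05 weight_decay:0.001 --r2r-local-lr-scheduler StepLR step_size:1 gamma:0.995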

-s, --seed <seed>#

seed for random generators after data is partitioned.

--device <device>#

device to load the model and data on

--log-dir <log_dir>#

directory to store the logs.

--n-point-summary <n_point_summary>#

number of most recent score report points to store and average for the final performance.

Default

10
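
For example, to pin the run to a specific device, fix the seed, and collect logs in a custom directory (the device string and path are illustrative):

fedsim-cli fed-tune --device cuda:0 --seed 11 --log-dir ./logs/fed-tune --n-point-summary 20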

--local-score <local_score>#

hooks a score object to a split of the local datasets. Score classes are chosen from fedsim.scores. This option can be passed multiple times.

--global-score <global_score>#

hooks a score object to a split of the global datasets. Score classes are chosen from fedsim.scores. This option can be passed multiple times.
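
For example, hooking an accuracy score to both the global test split and the local train split might look like the following (the Accuracy class name and the split:/log_freq: arguments are assumptions; check fedsim.scores and the fedsim-cli documentation for the exact names and syntax):

fedsim-cli fed-tune --global-score Accuracy split:test log_freq:50 --local-score Accuracy split:train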

