Error locating target 'sheeprl.envs.dmc.DMCWrapper' #331

@LucaZanatta

Description

Hi all! Nice project!

When I run the following:
```
python sheeprl exp=dreamer_v3_dmc_walker_walk.yaml
```

I get the following error:

```
CONFIG
├── algo
│ └── name: dreamer_v3
│ total_steps: 500000
│ per_rank_batch_size: 16
│ run_test: true
│ cnn_keys:
│ encoder:
│ - rgb
│ decoder:
│ - rgb
│ mlp_keys:
│ encoder: []
│ decoder: []
│ world_model:
│ optimizer:
│ _target_: torch.optim.Adam
│ lr: 0.0001
│ eps: 1.0e-08
│ weight_decay: 0
│ betas:
│ - 0.9
│ - 0.999
│ discrete_size: 32
│ stochastic_size: 32
│ kl_dynamic: 0.5
│ kl_representation: 0.1
│ kl_free_nats: 1.0
│ kl_regularizer: 1.0
│ continue_scale_factor: 1.0
│ clip_gradients: 1000.0
│ decoupled_rssm: false
│ learnable_initial_recurrent_state: true
│ encoder:
│ cnn_channels_multiplier: 32
│ cnn_act: torch.nn.SiLU
│ dense_act: torch.nn.SiLU
│ mlp_layers: 2
│ cnn_layer_norm:
│ cls: sheeprl.models.models.LayerNormChannelLast
│ kw:
│ eps: 0.001
│ mlp_layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ dense_units: 512
│ recurrent_model:
│ recurrent_state_size: 512
│ layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ dense_units: 512
│ transition_model:
│ hidden_size: 512
│ dense_act: torch.nn.SiLU
│ layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ representation_model:
│ hidden_size: 512
│ dense_act: torch.nn.SiLU
│ layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ observation_model:
│ cnn_channels_multiplier: 32
│ cnn_act: torch.nn.SiLU
│ dense_act: torch.nn.SiLU
│ mlp_layers: 2
│ cnn_layer_norm:
│ cls: sheeprl.models.models.LayerNormChannelLast
│ kw:
│ eps: 0.001
│ mlp_layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ dense_units: 512
│ reward_model:
│ dense_act: torch.nn.SiLU
│ mlp_layers: 2
│ layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ dense_units: 512
│ bins: 255
│ discount_model:
│ learnable: true
│ dense_act: torch.nn.SiLU
│ mlp_layers: 2
│ layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ dense_units: 512
│ actor:
│ optimizer:
│ _target_: torch.optim.Adam
│ lr: 8.0e-05
│ eps: 1.0e-05
│ weight_decay: 0
│ betas:
│ - 0.9
│ - 0.999
│ cls: sheeprl.algos.dreamer_v3.agent.Actor
│ ent_coef: 0.0003
│ min_std: 0.1
│ max_std: 1.0
│ init_std: 2.0
│ dense_act: torch.nn.SiLU
│ mlp_layers: 2
│ layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ dense_units: 512
│ clip_gradients: 100.0
│ unimix: 0.01
│ action_clip: 1.0
│ moments:
│ decay: 0.99
│ max: 1.0
│ percentile:
│ low: 0.05
│ high: 0.95
│ critic:
│ optimizer:
│ _target_: torch.optim.Adam
│ lr: 8.0e-05
│ eps: 1.0e-05
│ weight_decay: 0
│ betas:
│ - 0.9
│ - 0.999
│ dense_act: torch.nn.SiLU
│ mlp_layers: 2
│ layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ dense_units: 512
│ per_rank_target_network_update_freq: 1
│ tau: 0.02
│ bins: 255
│ clip_gradients: 100.0
│ gamma: 0.996996996996997
│ lmbda: 0.95
│ horizon: 15
│ replay_ratio: 0.5
│ learning_starts: 1300
│ per_rank_pretrain_steps: 0
│ per_rank_sequence_length: 64
│ cnn_layer_norm:
│ cls: sheeprl.models.models.LayerNormChannelLast
│ kw:
│ eps: 0.001
│ mlp_layer_norm:
│ cls: sheeprl.models.models.LayerNorm
│ kw:
│ eps: 0.001
│ dense_units: 512
│ mlp_layers: 2
│ dense_act: torch.nn.SiLU
│ cnn_act: torch.nn.SiLU
│ unimix: 0.01
│ hafner_initialization: true
│ player:
│ discrete_size: 32

├── buffer
│ └── size: 500000
│ memmap: true
│ validate_args: false
│ from_numpy: false
│ checkpoint: true

├── checkpoint
│ └── every: 10000
│ resume_from: null
│ save_last: true
│ keep_last: 5

├── env
│ └── id: walker_walk
│ num_envs: 4
│ frame_stack: 1
│ sync_env: true
│ screen_size: 64
│ action_repeat: 2
│ grayscale: false
│ clip_rewards: false
│ capture_video: true
│ frame_stack_dilation: 1
│ actions_as_observation:
│ num_stack: -1
│ noop: You MUST define the NOOP
│ dilation: 1
│ max_episode_steps: -1
│ reward_as_observation: false
│ wrapper:
│ _target_: sheeprl.envs.dmc.DMCWrapper
│ domain_name: walker
│ task_name: walk
│ width: 64
│ height: 64
│ seed: null
│ from_pixels: true
│ from_vectors: false

├── fabric
│ └── _target_: lightning.fabric.Fabric
│ devices: 1
│ num_nodes: 1
│ strategy: auto
│ accelerator: cuda
│ precision: bf16-mixed
│ callbacks:
│ - _target_: sheeprl.utils.callback.CheckpointCallback
│ keep_last: 5

└── metric
└── log_every: 5000
disable_timer: false
log_level: 1
sync_on_compute: false
aggregator:
_target_: sheeprl.utils.metric.MetricAggregator
raise_on_missing: false
metrics:
Rewards/rew_avg:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Game/ep_len_avg:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Loss/world_model_loss:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Loss/value_loss:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Loss/policy_loss:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Loss/observation_loss:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Loss/reward_loss:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Loss/state_loss:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Loss/continue_loss:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
State/kl:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
State/post_entropy:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
State/prior_entropy:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Grads/world_model:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Grads/actor:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
Grads/critic:
_target_: torchmetrics.MeanMetric
sync_on_compute: false
logger:
_target_: lightning.fabric.loggers.TensorBoardLogger
name: 2025-02-27_13-59-20_dreamer_v3_walker_walk_5
root_dir: logs/runs/dreamer_v3/walker_walk
version: null
default_hp_metric: true
prefix: ''
sub_dir: null

Using bfloat16 Automatic Mixed Precision (AMP)
Seed set to 5
Log dir: logs/runs/dreamer_v3/walker_walk/2025-02-27_13-59-20_dreamer_v3_walker_walk_5/version_0
Error executing job with overrides: ['exp=dreamer_v3_dmc_walker_walk.yaml', 'env.sync_env=True']
Error locating target 'sheeprl.envs.dmc.DMCWrapper', set env var HYDRA_FULL_ERROR=1 to see chained exception.
```

Am I missing something?
