Closed NKJP parameter tuning
  • Created by Michał Marcińczuk

    Optimization of the training parameters on the NKJP corpus.

    Assumptions:

    • Results are to be logged to wandb,
    • Training on NKJP,
    • Test different dropout values: from 0.1 to 0.8 with a step of 0.1.
    wandb login e754aec91f2650f309c3db6e8b41da40cafa5ceb

    Snippet:

    # Init wandb
    import os

    import torch
    import wandb

    wandb.init(project="poldeepner2-nkjp")

    # ... model instantiation code ...

    # Log gradients and parameters with wandb
    wandb.watch(model)

    # ... model train/evaluate code ...
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        if batch_idx % args.log_interval == 0:
            wandb.log({"Test Accuracy": correct / total, "Test Loss": loss})

    # Save the model checkpoint to the wandb run directory
    torch.save(model.state_dict(), os.path.join(wandb.run.dir, 'model.pt'))
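    The dropout grid from the assumptions (0.1 to 0.8, step 0.1) could also be driven by a wandb sweep instead of manual runs. A minimal sketch of the sweep configuration is below; the project name "poldeepner2-nkjp" comes from the snippet above, while the metric name and the `dropout_grid` helper are assumptions for illustration.

    ```python
    def dropout_grid(start=0.1, stop=0.8, step=0.1):
        """Return the dropout values 0.1, 0.2, ..., 0.8 as clean one-decimal floats."""
        n = int(round((stop - start) / step)) + 1
        return [round(start + i * step, 1) for i in range(n)]

    # Grid-search sweep over the dropout values listed in the assumptions.
    sweep_config = {
        "method": "grid",
        "metric": {"name": "Test Loss", "goal": "minimize"},
        "parameters": {"dropout": {"values": dropout_grid()}},
    }
    ```

    With a `train()` function that reads `wandb.config.dropout`, the sweep would be launched via `sweep_id = wandb.sweep(sweep_config, project="poldeepner2-nkjp")` followed by `wandb.agent(sweep_id, function=train)`.
    
    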
    Edited by Michał Marcińczuk
