Syntactic Tools / combo

Commit c0835180, authored 4 years ago by Mateusz Klimaszewski
Fix training loops and metrics.
Parent: b8c83784
Part of 2 merge requests: !15 Merge develop to master, !14 Install and docs
Showing 3 changed files with 6 additions and 5 deletions:

  combo/training/checkpointer.py  +1 −0
  combo/training/trainer.py       +2 −2
  combo/utils/metrics.py          +3 −3
combo/training/checkpointer.py  +1 −0

@@ -16,6 +16,7 @@ class FinishingTrainingCheckpointer(training.Checkpointer):
             epoch: Union[int, str],
             trainer: "allen_trainer.Trainer",
             is_best_so_far: bool = False,
+            save_model_only: bool = False,
     ) -> None:
         if trainer._learning_rate_scheduler.decreases <= 1 or epoch == trainer._num_epochs - 1:
             super().save_checkpoint(epoch, trainer, is_best_so_far)
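A minimal sketch of the condition this checkpointer guards on, using hypothetical _FakeScheduler/_FakeTrainer stand-ins (not repository code): a checkpoint is written only while the learning-rate scheduler has decreased at most once, or on the final epoch. The new save_model_only keyword is accepted, presumably to keep the signature aligned with the upstream AllenNLP Checkpointer.save_checkpoint, but it does not affect this condition.

    # Hypothetical stand-ins; only the attributes read by the condition exist.
    from typing import Union


    class _FakeScheduler:
        def __init__(self, decreases: int):
            self.decreases = decreases  # how many times the LR has been reduced


    class _FakeTrainer:
        def __init__(self, decreases: int, num_epochs: int):
            self._learning_rate_scheduler = _FakeScheduler(decreases)
            self._num_epochs = num_epochs


    def should_save(epoch: Union[int, str], trainer: _FakeTrainer) -> bool:
        # Mirrors the `if` in the hunk above.
        return (trainer._learning_rate_scheduler.decreases <= 1
                or epoch == trainer._num_epochs - 1)


    trainer = _FakeTrainer(decreases=3, num_epochs=10)
    print(should_save(epoch=4, trainer=trainer))  # False: LR already decreased 3 times
    print(should_save(epoch=9, trainer=trainer))  # True: final epoch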
combo/training/trainer.py  +2 −2

@@ -84,7 +84,7 @@ class GradientDescentTrainer(training.GradientDescentTrainer):
         logger.info("Beginning training.")

         val_metrics: Dict[str, float] = {}
-        this_epoch_val_metric: float
+        this_epoch_val_metric: float = None
         metrics: Dict[str, Any] = {}
         epochs_trained = 0
         training_start_time = time.time()

@@ -141,7 +141,7 @@ class GradientDescentTrainer(training.GradientDescentTrainer):
             # Check validation metric for early stopping
             this_epoch_val_metric = val_metrics[self._validation_metric]
-            self._metric_tracker.add_metric(this_epoch_val_metric)
+            # self._metric_tracker.add_metric(this_epoch_val_metric)
             train_metrics["patience"] = self._metric_tracker._patience
             if self._metric_tracker.should_stop_early():
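The first hunk gives this_epoch_val_metric a default of None instead of a bare annotation, and the second comments out the explicit self._metric_tracker.add_metric(...) call. The sketch below (plain Python, not repository code) illustrates why the bare annotation is fragile: under PEP 526 an annotated name in a function body becomes local, so reading it when validation never assigned it raises UnboundLocalError, whereas the None default degrades gracefully.

    def epoch_loop_without_default(run_validation: bool) -> None:
        this_epoch_val_metric: float           # annotation only, nothing bound
        if run_validation:
            this_epoch_val_metric = 0.87
        print(this_epoch_val_metric)           # UnboundLocalError if validation was skipped


    def epoch_loop_with_default(run_validation: bool) -> None:
        this_epoch_val_metric: float = None    # the change in the first hunk
        if run_validation:
            this_epoch_val_metric = 0.87
        print(this_epoch_val_metric)           # prints None when validation is skipped


    epoch_loop_with_default(run_validation=False)       # None
    try:
        epoch_loop_without_default(run_validation=False)
    except UnboundLocalError as exc:
        print(f"without a default: {exc}")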
combo/utils/metrics.py  +3 −3

@@ -241,10 +241,10 @@ class SemanticMetrics(metrics.Metric):
                            self.feats_score.correct_indices *
                            self.lemma_score.correct_indices *
                            self.attachment_scores.correct_indices *
-                           enhanced_indices)
+                           enhanced_indices) * mask.flatten()

-        total, correct_indices = self.detach_tensors(total, correct_indices)
-        self.em_score = (correct_indices.float().sum() / total).item()
+        total, correct_indices = self.detach_tensors(total, correct_indices.float().sum())
+        self.em_score = (correct_indices / total).item()

     def get_metric(self, reset: bool) -> Dict[str, float]:
         metrics_dict = {
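A small PyTorch sketch (made-up tensors, not repository code) of the masked exact-match computation in the hunk above. It assumes total is the number of real, non-padded tokens; multiplying the combined correctness indicator by mask.flatten() keeps padded positions from being counted as correct, and summing before detach_tensors means em_score divides a scalar count by that total.

    import torch

    # Hypothetical per-token correctness indicators for 6 token slots (1 = correct).
    upos_correct = torch.tensor([1, 1, 0, 1, 1, 1])
    feats_correct = torch.tensor([1, 1, 1, 1, 1, 1])
    head_correct = torch.tensor([1, 0, 1, 1, 1, 1])

    # Padding mask: the last two slots are padding, not real tokens.
    mask = torch.tensor([[1, 1, 1, 1, 0, 0]])
    total = mask.sum()  # 4 real tokens

    # Product of the indicators, zeroed at padded positions (as in the new line).
    correct_indices = (upos_correct * feats_correct * head_correct) * mask.flatten()

    em_score = (correct_indices.float().sum() / total).item()
    print(em_score)  # 0.5 -> 2 of the 4 real tokens are fully correct

    # Without the mask, the two padded slots would also count as "correct",
    # giving 4 / 4 = 1.0 and overstating the exact match.
    unmasked_em = ((upos_correct * feats_correct * head_correct).float().sum() / total).item()
    print(unmasked_em)  # 1.0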