Syntactic Tools / combo · Merge request !4

Documentation

Merged · Mateusz Klimaszewski requested to merge documentation into develop · 4 years ago
Commits: 12 · Changes: 17
Viewing commit ea870679 · Add TODO note for next AllenNLP update.
Authored by Mateusz Klimaszewski · 4 years ago
1 file changed: +2 −1

combo/data/token_indexers/pretrained_transformer_mismatched_indexer.py (+2 −1)
@@ -6,10 +6,11 @@ from allennlp.data import token_indexers, tokenizers

@data.TokenIndexer.register("pretrained_transformer_mismatched_fixed")
class PretrainedTransformerMismatchedIndexer(token_indexers.PretrainedTransformerMismatchedIndexer):
    """TODO(mklimasz) Remove during next allennlp update, fixed on allennlp master."""

    def __init__(self, model_name: str, namespace: str = "tags", max_length: int = None,
                 tokenizer_kwargs: Optional[Dict[str, Any]] = None, **kwargs) -> None:
        super().__init__(model_name, namespace, max_length, tokenizer_kwargs, **kwargs)
        # The matched version v.s. mismatched
        self._matched_indexer = PretrainedTransformerIndexer(model_name,
                                                             namespace,
                                                             max_length,
                                                             tokenizer_kwargs,
                                                             **kwargs)