Commit 10f48ca8 authored by piotrmp's avatar piotrmp

Documenting the GPU inference feature.

parent c76c3e52
Merge request !2: Multiword generation
@@ -56,7 +56,9 @@ Alternatively, you can select a specific model by defining LAMBO variant (`LAMBO
 lambo = Lambo.get('LAMBO-UD_Polish-PDB')
 ```
-Finally, if a subword splitter is available, you can opt out of using it by providing `with_splitter=False` as an argument to the `get()` function.
+There are two optional arguments to the `get()` function:
+- You can opt out of using the subword splitter by providing `with_splitter=False`.
+- You can point to a specific PyTorch device by providing the `device` parameter, for example `device=torch.device('cuda')` to enable GPU acceleration.
 Once the model is ready, you can perform segmentation of a given text:
 ```
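The added `device` parameter follows the usual PyTorch pattern of selecting a GPU when one is available and falling back to CPU otherwise. A minimal sketch of that selection logic is below; the commented-out `Lambo.get()` call illustrates how the chosen device would be passed in per the documented signature (the import path is an assumption, not shown in this diff):

```python
import torch

# Choose a GPU if one is available, otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hypothetical usage following the documented get() signature:
# from lambo.segmenter.lambo import Lambo
# lambo = Lambo.get('LAMBO-UD_Polish-PDB', with_splitter=False, device=device)

print(device.type)
```

On a machine without CUDA this prints `cpu`, and the model would run on the CPU exactly as before the change.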