diff --git a/docs/prediction.md b/docs/prediction.md
index 879f980518ca399d1e3e7e8a8f3db3125b426b17..25b7df175d10b3e2d5a48ef8aad693e2c2ed5a5c 100644
--- a/docs/prediction.md
+++ b/docs/prediction.md
@@ -29,6 +29,27 @@ tokenized_sentence = ["Sentence", "to", "parse", "."]
 sentence = nlp([tokenized_sentence])
 ```
 
+You can use COMBO with the [LAMBO](https://gitlab.clarin-pl.eu/syntactic-tools/lambo) tokeniser. Note that LAMBO has to be installed separately; see the [LAMBO installation instructions](https://gitlab.clarin-pl.eu/syntactic-tools/lambo#installation).
+
+```python
+# Import COMBO and the LAMBO tokeniser wrapper
+from combo.predict import COMBO
+from combo.utils import lambo
+
+# Download the model and build the pipeline with a LAMBO tokeniser for English
+nlp = COMBO.from_pretrained("english-bert-base-ud29", tokenizer=lambo.LamboTokenizer("en"))
+sentences = nlp("This is the first sentence. This is the second sentence to parse.")
+```
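+
+Because LAMBO also performs sentence segmentation, the call above returns one parsed sentence per segment of the input text. The sketch below (assuming, as in the other prediction examples in this documentation, that each parsed sentence exposes its tokens via a `tokens` attribute) shows one way to inspect the results:
+
+```python
+# Each element of `sentences` is one parsed sentence produced by the pipeline;
+# its tokens attribute (assumed from the other examples in these docs) holds the predicted annotations.
+for sentence in sentences:
+    print(sentence.tokens)
+```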
+
 ## COMBO as a command-line interface 
 ### CoNLL-U file prediction:
 Input and output are both in `*.conllu` format.