Examples¶
Here are some examples to help you get started with the easyllm library:
Hugging Face¶
| Example | Description |
|---|---|
| Detailed ChatCompletion Example | Shows how to use the ChatCompletion API to have a conversational chat with the model. |
| Detailed Completion Example | Uses the TextCompletion API to generate text with the model. |
| Create Embeddings | Embeds text into vector representations using the model. |
| Example how to stream chat requests | Demonstrates streaming multiple chat requests to efficiently chat with the model. |
| Example how to stream text requests | Shows how to stream multiple text completion requests. |
| Hugging Face Inference Endpoints Example | Example on how to use custom endpoints, e.g. Inference Endpoints or localhost. |
| Retrieval Augmented Generation using Llama 2 | Example on how to use Llama 2 70B for in-context retrieval augmented generation. |
| Llama 2 70B Agent/Tool use example | Example on how to use Llama 2 70B to interact with tools, so it can be used as an agent. |
These examples cover the main functionality of the library: chat completion, text completion, and embeddings. A minimal usage sketch follows below.
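If you want a quick feel for the API before opening the notebooks, here is a minimal sketch of a chat completion against the Hugging Face client, assuming the OpenAI-style `ChatCompletion.create` interface used throughout the examples; the model name and generation parameters are only illustrative.

```python
from easyllm.clients import huggingface

# Convert OpenAI-style messages into a Llama 2 chat prompt
huggingface.prompt_builder = "llama2"

response = huggingface.ChatCompletion.create(
    model="meta-llama/Llama-2-70b-chat-hf",  # illustrative model id
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    temperature=0.7,
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

The `Completion` and `Embedding` clients follow the same pattern; see the linked notebooks for details such as streaming and custom endpoints.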
Amazon SageMaker¶
| Example | Description |
|---|---|
| Detailed ChatCompletion Example | Shows how to use the ChatCompletion API to have a conversational chat with the model. |
| Detailed Completion Example | Uses the TextCompletion API to generate text with the model. |
| Create Embeddings | Embeds text into vector representations using the model. |
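As a rough orientation, the SageMaker client mirrors the Hugging Face client, but `model` refers to the name of a deployed SageMaker endpoint and AWS credentials are resolved from your environment. The endpoint name below is a placeholder, not a real resource.

```python
from easyllm.clients import sagemaker

# Build Llama 2 chat prompts from OpenAI-style messages
sagemaker.prompt_builder = "llama2"

response = sagemaker.ChatCompletion.create(
    model="my-llama2-70b-endpoint",  # placeholder: name of your deployed SageMaker endpoint
    messages=[
        {"role": "user", "content": "Summarize what Amazon SageMaker is in one sentence."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```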
Amazon Bedrock¶
| Example | Description |
|---|---|
| Detailed ChatCompletion Example | Shows how to use the ChatCompletion API to have a conversational chat with the model. |
| Example how to stream chat requests | Demonstrates streaming multiple chat requests to efficiently chat with the model. |
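The sketch below shows streamed chat output with the Bedrock client, assuming the same OpenAI-style streaming interface as the other clients; the Bedrock model identifier is illustrative and depends on which models are enabled in your AWS account.

```python
from easyllm.clients import bedrock

# Build chat prompts in the format the target model expects
bedrock.prompt_builder = "llama2"

# stream=True yields OpenAI-style chunks with incremental "delta" content
for chunk in bedrock.ChatCompletion.create(
    model="meta.llama2-13b-chat-v1",  # illustrative Bedrock model id
    messages=[{"role": "user", "content": "Write a haiku about the cloud."}],
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
```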