This repository was archived by the owner on Apr 1, 2026. It is now read-only.
4. Run the `/models` command to select the one you want.
---

### Hugging Face

[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers) provides access to open models supported by 17+ providers.

1. Head over to [Hugging Face settings](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) to create a token with permission to make calls to Inference Providers.

2. Run `opencode auth login` and select **Hugging Face**.

   ```bash
   $ opencode auth login

   ┌ Add credential
   │
   ◆ Select provider
   │ ● Hugging Face
   │ ...
   └
   ```

3. Enter your Hugging Face token.

   ```bash
   $ opencode auth login

   ┌ Add credential
   │
   ◇ Select provider
   │ Hugging Face
   │
   ◇ Enter your API key
   │ _
   └
   ```

4. Run the `/models` command to select a model like _Kimi-K2-Instruct_ or _GLM-4.6_.
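The same token also works outside OpenCode. As a hedged sketch (not part of OpenCode itself), the Inference Providers docs describe an OpenAI-compatible chat-completions route; the endpoint URL, the `HF_TOKEN` environment variable, and the `moonshotai/Kimi-K2-Instruct` model id below are assumptions based on those docs:

```python
import json
import os
import urllib.request

# Sketch: build a chat-completion request for Hugging Face's
# OpenAI-compatible router (endpoint assumed from the Inference
# Providers docs). HF_TOKEN is assumed to hold the fine-grained
# token created in step 1.
def build_request(prompt: str, model: str = "moonshotai/Kimi-K2-Instruct"):
    token = os.environ.get("HF_TOKEN", "hf_...")  # placeholder fallback
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://router.huggingface.co/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Hello!")
# Send with urllib.request.urlopen(req) once HF_TOKEN is set.
```

This only constructs the request; sending it requires a valid token with Inference Providers permission.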