# Setup 🤗

Getting set up for the Hugging Face ecosystem.

The following steps will help you get started with the Hugging Face ecosystem. It's best to follow the "Start here" steps first and then go through the other setup steps as necessary.
## Start here (universal steps)
- Create a free Hugging Face account at https://huggingface.co/join.
- Create a Hugging Face access token with read and write access at https://huggingface.co/settings/tokens.
  - You can create a read/write token using the fine-grained settings and selecting all the appropriate options.
  - Read more on Hugging Face access tokens at https://huggingface.co/docs/hub/en/security-tokens.
Note: Do not share your token with others. Always keep it private and avoid saving it in plain text (e.g. hardcoded in a notebook or script).
## Getting set up on Google Colab
Note: If you’re unfamiliar with Google Colab, I’d recommend going through Sam Witteveen’s video Colab 101 and then Advanced Colab to learn more.
- Follow the steps in Start here.
- Add your Hugging Face read/write token as a Secret in Google Colab.
  - Naming this Secret `HF_TOKEN` will mean that Hugging Face libraries automatically recognize your token for future use.
Alternatively, if you need to force a relogin for a notebook session, you can run:

```python
import huggingface_hub  # requires !pip install huggingface_hub

# Login to Hugging Face
huggingface_hub.login()
```

Then enter your token in the box that appears (note: this token will only be active for the current notebook session and will be deleted when your Google Colab instance terminates).
## Getting started locally
- Follow the steps in Start here.
- Follow your specific hardware steps below.
| Hardware | Package Manager | Backend | Setup Guide |
|---|---|---|---|
| NVIDIA GPU | Conda | CUDA | NVIDIA GPU + Conda |
| NVIDIA GPU | uv (pip) | CUDA | NVIDIA GPU + uv |
| macOS (Apple Silicon) | Conda | MPS | macOS + Conda |
| macOS (Apple Silicon) | uv (pip) | MPS | macOS + uv |
## Global Hugging Face library requirements
Depending on your environment/local hardware, there are a handful of foundation libraries we’ll need to install from the Hugging Face ecosystem:
- `transformers` - comes pre-installed on Google Colab, but if you're running on your local machine, you can install it via `pip install transformers`.
- `datasets` - a library for accessing and manipulating datasets on and off the Hugging Face Hub; install it via `pip install datasets`.
- `evaluate` - a library for evaluating machine learning model performance with various metrics; install it via `pip install evaluate`.
- `accelerate` - a library for training machine learning models faster; install it via `pip install accelerate`.
- `gradio` - a library for creating interactive demos of machine learning models; install it via `pip install gradio`.
## NVIDIA GPU + Conda local setup
Install Miniconda to get the conda package manager.
### Clone the course repository

```shell
git clone https://github.com/mrdbourke/learn-huggingface
```

Change into the target directory:

```shell
cd learn-huggingface
```

### Create and activate conda environment

Create environment:

```shell
conda create -n learn-hf python=3.12 -y
```

Note: This setup has been tested with Python 3.12. If you'd like, you can use a different/later version.

Activate it:

```shell
conda activate learn-hf
```

### Install PyTorch

```shell
python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
```

### Install Hugging Face CLI and Login

Install Hugging Face CLI:

```shell
python -m pip install -U "huggingface_hub[cli]"
```

Login with your Hugging Face account to authenticate your local machine:

```shell
hf auth login
```

### Install dependencies

Install dependencies we'll need for the projects:

```shell
python -m pip install transformers datasets evaluate accelerate gradio trl matplotlib jupyter
```

Note: If you run into any dependency issues while running the projects, you can always install them via `pip install [DEPENDENCY_NAME]`.
### Check that the imports work

```shell
python -c "
import torch, transformers, datasets, evaluate, accelerate, gradio, trl, matplotlib, huggingface_hub
assert torch.cuda.is_available(), 'CUDA GPU not available'
print('torch', torch.__version__)
print('cuda_available', torch.cuda.is_available())
print('cuda_device_count', torch.cuda.device_count())
print('cuda_device', torch.cuda.get_device_name(0))
x = torch.tensor([1.0, 2.0]).to('cuda')
print('cuda_tensor_device', x.device)
print('transformers', transformers.__version__)
print('datasets', datasets.__version__)
print('evaluate', evaluate.__version__)
print('accelerate', accelerate.__version__)
print('gradio', gradio.__version__)
print('trl', trl.__version__)
print('matplotlib', matplotlib.__version__)
print('huggingface_hub', huggingface_hub.__version__)
print('Conda env ready! Good to code!')
"
```

If these work, we're good to go!
### Get started

#### Option A: Jupyter Lab

```shell
jupyter lab
```

#### Option B: VS Code

Open the project in VS Code:

```shell
code .
```

Note: This requires VS Code installed locally with the Jupyter extension so you can run `.ipynb` notebooks directly in VS Code.

Alternatively, you can also start writing Python scripts to follow along and learn.
## NVIDIA GPU + uv (pip) local setup
Install uv to get a fast Python package manager.
### Clone the course repository

```shell
git clone https://github.com/mrdbourke/learn-huggingface
```

Change into the target directory:

```shell
cd learn-huggingface
```

### Create and activate virtual environment

Create environment:

```shell
uv venv learn-hf --python 3.12
```

Note: This setup has been tested with Python 3.12. If you'd like, you can use a different/later version.

Activate it:

```shell
source learn-hf/bin/activate
```

### Install PyTorch

```shell
uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
```

### Install Hugging Face CLI and Login

Install Hugging Face CLI:

```shell
uv pip install -U "huggingface_hub[cli]"
```

Login with your Hugging Face account to authenticate your local machine:

```shell
huggingface-cli login
```

### Install dependencies

Install dependencies we'll need for the projects:

```shell
uv pip install transformers datasets evaluate accelerate gradio trl matplotlib jupyter
```

Note: If you run into any dependency issues while running the projects, you can always install them via `uv pip install [DEPENDENCY_NAME]`.
### Check that the imports work

```shell
python -c "
import torch, transformers, datasets, evaluate, accelerate, gradio, trl, matplotlib, huggingface_hub
assert torch.cuda.is_available(), 'CUDA GPU not available'
print('torch', torch.__version__)
print('cuda_available', torch.cuda.is_available())
print('cuda_device_count', torch.cuda.device_count())
print('cuda_device', torch.cuda.get_device_name(0))
x = torch.tensor([1.0, 2.0]).to('cuda')
print('cuda_tensor_device', x.device)
print('transformers', transformers.__version__)
print('datasets', datasets.__version__)
print('evaluate', evaluate.__version__)
print('accelerate', accelerate.__version__)
print('gradio', gradio.__version__)
print('trl', trl.__version__)
print('matplotlib', matplotlib.__version__)
print('huggingface_hub', huggingface_hub.__version__)
print('uv env ready! Good to code!')
"
```

If these work, we're good to go!
### Get started

#### Option A: Jupyter Lab

```shell
jupyter lab
```

#### Option B: VS Code

Open the project in VS Code:

```shell
code .
```

Note: This requires VS Code installed locally with the Jupyter extension so you can run `.ipynb` notebooks directly in VS Code.

Alternatively, you can also start writing Python scripts to follow along and learn.
## macOS + Conda local setup
Install Miniconda to get the conda package manager.
Note: macOS uses the MPS (Metal Performance Shaders) backend for GPU acceleration on Apple Silicon. Training on MPS is generally much slower than on NVIDIA GPUs with CUDA; inference, however, works quite well. MPS is great for learning and experimentation, but if you need faster training, consider using a cloud GPU (e.g. Google Colab) or an NVIDIA GPU machine.
### Clone the course repository

Clone the learn-huggingface repo:

```shell
git clone https://github.com/mrdbourke/learn-huggingface
```

Change into the target directory:

```shell
cd learn-huggingface
```

### Create and activate conda environment

Create environment:

```shell
conda create -n learn-hf python=3.12 -y
```

Note: This setup has been tested with Python 3.12. If you'd like, you can use a different/later version.

Activate it:

```shell
conda activate learn-hf
```

### Install PyTorch

```shell
python -m pip install torch torchvision
```

Note: On macOS, the default PyTorch install includes MPS (Metal Performance Shaders) support. No special index URL is needed.

### Install Hugging Face CLI and Login

Install Hugging Face CLI:

```shell
python -m pip install -U "huggingface_hub[cli]"
```

Login with your Hugging Face account to authenticate your local machine:

```shell
hf auth login
```

### Install dependencies

Install dependencies we'll need for the projects:

```shell
python -m pip install transformers datasets evaluate accelerate gradio trl matplotlib jupyter
```

Note: If you run into any dependency issues while running the projects, you can always install them via `pip install [DEPENDENCY_NAME]`.
### Check that the imports work

```shell
python -c "
import torch, transformers, datasets, evaluate, accelerate, gradio, trl, matplotlib, huggingface_hub
assert torch.backends.mps.is_available(), 'MPS not available'
print('torch', torch.__version__)
print('mps_available', torch.backends.mps.is_available())
print('mps_built', torch.backends.mps.is_built())
x = torch.tensor([1.0, 2.0]).to('mps')
print('mps_tensor_device', x.device)
print('transformers', transformers.__version__)
print('datasets', datasets.__version__)
print('evaluate', evaluate.__version__)
print('accelerate', accelerate.__version__)
print('gradio', gradio.__version__)
print('trl', trl.__version__)
print('matplotlib', matplotlib.__version__)
print('huggingface_hub', huggingface_hub.__version__)
print('macOS Conda env ready! Good to code!')
"
```

If these work, we're good to go!
### Get started

#### Option A: Jupyter Lab

```shell
jupyter lab
```

#### Option B: VS Code

Open the project in VS Code:

```shell
code .
```

Note: This requires VS Code installed locally with the Jupyter extension so you can run `.ipynb` notebooks directly in VS Code.

Alternatively, you can also start writing Python scripts to follow along and learn.
## macOS + uv (pip) local setup
Install uv to get a fast Python package manager.
Note: macOS uses the MPS (Metal Performance Shaders) backend for GPU acceleration on Apple Silicon. Training on MPS is generally much slower than on NVIDIA GPUs with CUDA; inference, however, works quite well. MPS is great for learning and experimentation, but if you need faster training, consider using a cloud GPU (e.g. Google Colab) or an NVIDIA GPU machine.
### Clone the course repository

Clone the learn-huggingface repo:

```shell
git clone https://github.com/mrdbourke/learn-huggingface
```

Change into the target directory:

```shell
cd learn-huggingface
```

### Create and activate virtual environment

Create environment:

```shell
uv venv learn-hf --python 3.12
```

Note: This setup has been tested with Python 3.12. If you'd like, you can use a different/later version.

Activate it:

```shell
source learn-hf/bin/activate
```

### Install PyTorch

```shell
uv pip install torch torchvision
```

Note: On macOS, the default PyTorch install includes MPS (Metal Performance Shaders) support. No special index URL is needed.

### Install Hugging Face CLI and Login

Install Hugging Face CLI:

```shell
uv pip install -U "huggingface_hub[cli]"
```

Login with your Hugging Face account to authenticate your local machine:

```shell
huggingface-cli login
```

### Install dependencies

Install dependencies we'll need for the projects:

```shell
uv pip install transformers datasets evaluate accelerate gradio trl matplotlib jupyter
```

Note: If you run into any dependency issues while running the projects, you can always install them via `uv pip install [DEPENDENCY_NAME]`.
### Check that the imports work

```shell
python -c "
import torch, transformers, datasets, evaluate, accelerate, gradio, trl, matplotlib, huggingface_hub
assert torch.backends.mps.is_available(), 'MPS not available'
print('torch', torch.__version__)
print('mps_available', torch.backends.mps.is_available())
print('mps_built', torch.backends.mps.is_built())
x = torch.tensor([1.0, 2.0]).to('mps')
print('mps_tensor_device', x.device)
print('transformers', transformers.__version__)
print('datasets', datasets.__version__)
print('evaluate', evaluate.__version__)
print('accelerate', accelerate.__version__)
print('gradio', gradio.__version__)
print('trl', trl.__version__)
print('matplotlib', matplotlib.__version__)
print('huggingface_hub', huggingface_hub.__version__)
print('macOS uv env ready! Good to code!')
"
```

If these work, we're good to go!
### Get started

#### Option A: Jupyter Lab

```shell
jupyter lab
```

#### Option B: VS Code

Open the project in VS Code:

```shell
code .
```

Note: This requires VS Code installed locally with the Jupyter extension so you can run `.ipynb` notebooks directly in VS Code.

Alternatively, you can also start writing Python scripts to follow along and learn.