If you’re looking to build a robust environment for machine learning or GPU-accelerated computing on your Pop!_OS 22.04 system, this guide covers two critical steps: installing Miniconda and setting up a Conda-based CUDA environment. You’ll get the simplicity of isolated Conda environments plus the flexibility to install CUDA libraries and tools without interfering with your system’s native configuration.
Part I: Installing Miniconda on Pop!_OS 22.04
Miniconda is a lightweight, minimal installer for Conda that lets you manage isolated environments and packages easily. We’ll use the quick command-line method detailed in the official Miniconda documentation.
Step 1: Download the Miniconda Installer
Open your terminal and run:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
This command downloads the latest Miniconda installer for Linux directly from the official Anaconda repository.
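(Optional) Before running the installer, you can verify the download’s integrity by comparing its SHA-256 checksum against the hash published on the Anaconda repository page; a quick sketch:
sha256sum Miniconda3-latest-Linux-x86_64.sh
# Compare the output against the hash listed at https://repo.anaconda.com/miniconda/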
Step 2: Run the Installer
Execute the installer script:
bash Miniconda3-latest-Linux-x86_64.sh
Follow the prompts:
- License Agreement: Press Enter to scroll or type q to jump to the end, then type yes to accept.
- Installation Directory: The default is ~/miniconda3 (you can change it if desired).
- Initialization: When asked, type yes to allow the installer to initialize Conda (this updates your shell configuration, such as your ~/.bashrc).
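If you are scripting the setup, the installer also supports a non-interactive batch mode; a minimal sketch, assuming the default ~/miniconda3 prefix:
# -b accepts the license non-interactively, -p sets the install prefix
bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3
# Batch mode skips shell initialization, so run conda init manually afterwards
~/miniconda3/bin/conda init bash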
Step 3: Activate Miniconda
Reload your shell configuration so Conda becomes active:
source ~/.bashrc
Verify the installation by checking Conda’s version:
conda --version
You should see output similar to conda 24.x.x (the exact version depends on the installer build).
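For a fuller picture of the installation (base environment path, configured channels, platform), you can also run:
conda info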
Step 4: Update Conda (Optional but Recommended)
Update Conda to the latest version:
conda update -n base -c defaults conda
Follow the prompts to complete the update.
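By default, Conda auto-activates its base environment in every new shell. If you would rather opt in per session, this optional setting turns that off:
conda config --set auto_activate_base false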
Part II: Setting Up a CUDA-Enabled Environment Using Conda
Once you have Miniconda installed, you can create an isolated environment with CUDA support. This method is ideal if you are focusing on machine learning projects because it minimizes dependency conflicts and simplifies version management.
Step 1: Create a New Conda Environment
Create a new environment (we’ll call it ml_gpu) with your desired Python version (e.g., 3.9):
conda create -n ml_gpu python=3.9 -y
Activate the environment:
conda activate ml_gpu
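To confirm the environment is active and has the requested interpreter, a quick check:
conda env list     # the active environment is marked with an asterisk
python --version   # should report Python 3.9.x
which python       # should point inside ~/miniconda3/envs/ml_gpu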
Step 2: Install the CUDA Toolkit via Conda
There are two main options depending on your needs:
Option A: Runtime-Only Installation
For most machine learning frameworks (like TensorFlow or PyTorch), you only need the CUDA runtime libraries.
- Install the CUDA runtime libraries:
conda install -c nvidia cuda-runtime=12.2
(Adjust the version as needed. On NVIDIA’s conda channel, CUDA 12.x ships as the cuda-* packages, while the legacy cudatoolkit package only goes up to the 11.x series. As a rule of thumb, pick a toolkit no newer than the CUDA version your driver reports.)
- (Optional) Install cuDNN:
conda install -c nvidia cudnn
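Before pinning a version, check the maximum CUDA version your driver reports. Note also that frameworks such as PyTorch can pull in a matching CUDA runtime themselves; the following is a sketch of that route, assuming the pytorch and nvidia channels (check pytorch.org for the current recommended command):
# The "CUDA Version" field shows the newest CUDA the driver supports
nvidia-smi
# Example: install PyTorch with its own bundled CUDA runtime
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia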
Option B: Development Toolkit Installation (with NVCC)
If you plan to compile custom CUDA kernels or need the full suite (including the nvcc compiler), install the full toolkit package:
conda install -c nvidia cuda-toolkit=12.2
Tip: Many machine learning applications do not require NVCC. If your work is primarily with prebuilt frameworks (e.g., PyTorch), the runtime installation is usually sufficient.
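Once the development toolkit is installed, a quick end-to-end check is to compile and run a trivial kernel; a minimal sketch:
# Write a one-kernel test program
cat > hello.cu <<'EOF'
#include <cstdio>

// Each GPU thread prints its index
__global__ void hello() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();        // launch 1 block of 4 threads
    cudaDeviceSynchronize();  // wait for device-side printf to flush
    return 0;
}
EOF
nvcc hello.cu -o hello && ./hello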
Step 3: Verify the CUDA Environment
After installing the CUDA toolkit, you can verify the setup in one of two ways:
- Framework Verification:
For example, if you use PyTorch:
python -c "import torch; print(torch.cuda.is_available())"
A return value of True indicates that CUDA is accessible within your Conda environment.
- NVCC Verification (if using the development toolkit):
Check the CUDA compiler version:
nvcc --version
This should display the NVCC version details.
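For a slightly more informative check than the one-liner above, this sketch (assuming PyTorch is installed) also reports the device name and the CUDA version the framework was built against:
python - <<'EOF'
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("Built with CUDA:", torch.version.cuda)
EOF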
Step 4: Running Compute Tasks with PRIME Offloading (For Optimus Systems)
Since Pop!_OS systems with hybrid graphics use NVIDIA Optimus technology (where the integrated GPU handles the display), you may want to offload work to the NVIDIA GPU explicitly. Pure CUDA compute generally targets the NVIDIA GPU on its own, but the PRIME render offload variables are needed for OpenGL- or Vulkan-based workloads and are harmless otherwise. Within your Conda environment, use:
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia
Then run your compute application (e.g., a deep learning script):
python my_ml_script.py
With these variables set, offloaded work is sent to the NVIDIA GPU while the display remains managed by the integrated GPU.
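You can also scope the variables to a single invocation instead of exporting them for the whole shell session:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia python my_ml_script.py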
Final Recommendations and Verification
- Environment Isolation:
With Conda, you can easily create multiple environments for different projects without conflicts.
- Driver Compatibility:
Ensure your system’s NVIDIA driver is up to date (your nvidia-smi output should confirm this).
- Reproducibility:
Export your Conda environment to a YAML file for sharing or future deployments (see the sketch after this list):
conda env export > ml_gpu_env.yml
- Testing:
Run a small CUDA sample or your machine learning code to ensure everything works correctly. Monitor GPU utilization with:
watch -n 1 nvidia-smi
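To recreate the environment elsewhere from the exported file (assuming the same ml_gpu_env.yml name):
# Rebuild the environment on another machine, or after a reinstall
conda env create -f ml_gpu_env.yml
# For a leaner, more portable file, export only explicitly requested packages
conda env export --from-history > ml_gpu_env.yml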
By following this guide, you now have:
- Miniconda installed on Pop!_OS 22.04 using the quick command-line method.
- A dedicated Conda environment (ml_gpu) with the CUDA runtime (and optionally the full development toolkit) installed.
- Configuration for NVIDIA Optimus systems to offload tasks to the NVIDIA GPU when needed.
This setup streamlines your machine learning and GPU-accelerated computing workflow while maintaining a clean and reproducible development environment.
Happy computing and coding!
Feel free to update or customize this guide as your project requirements evolve. For further details, refer to the official Miniconda installation documentation and the NVIDIA Conda channels for CUDA packages.