Easy workflow for Python dependency management

A simple way to figure out your requirements.txt

Whenever you’re working with Python, you will need to manage your dependencies as you install and use multiple libraries. Poor dependency management is notorious for causing conflicts in production and for the typical “it worked on my laptop” issue: the Python environment on your local computer can be very different from your colleagues’ environments, or from whatever production environment you’re running.

To make sure that you don’t run into dependency issues and that your code stays reproducible, you will need to manage your dependencies. Dependencies are often tracked in the form of a requirements.txt or an environment.yaml (if you’re working with conda). Most deployment and serving tools require something like a requirements.txt to know what kind of environment to create for your code.

Personally, I work at UbiOps, where you need a requirements.txt, and I often get the question of how to actually determine the requirements.txt you need. If you’ve just opened your editor of choice and got to work, it is quite a hassle to figure out exactly what your dependencies are and what to include! You can of course run `pip freeze > requirements.txt`, but this requirements.txt will include every Python library you ever installed on your computer, not just the ones your project needs. To avoid this hassle I adhere to a very simple workflow that allows me to extract a requirements.txt quickly whenever I need to push to UbiOps. In this article, I will walk you through my process. Hopefully, it will help make dependency management easier!


How I do it

I have a very simple workflow I follow to make sure I can generate a proper requirements.txt file for my UbiOps deployment in no time. I personally use Visual Studio Code as my IDE of choice, but the steps I take should work regardless of your IDE.


Creating a Virtual environment

I start any project with a new empty folder. I open up this folder in Visual Studio Code and start a new terminal in that folder. The first thing I do in my clean workspace is start up a fresh virtual environment. Python offers built-in functionality for virtual environments, which makes them super easy to set up. If you’re a Windows user (like me) you can use:

python -m venv .venv

This will start up a new virtual environment in your current working directory with the name ‘.venv’. If you’re a Mac or Linux user you will need:

python3 -m venv .venv

You might need to run sudo apt install python3-venv first, though, when using Ubuntu/Debian.


Activating the virtual environment

Once your virtual environment is created you will see a new folder called ‘.venv’ in your working directory. This folder contains an activation script, which needs to be run to activate the virtual environment. On Windows you can do so by running:

.venv\Scripts\activate

On Mac or Linux, run:

source .venv/bin/activate

Once your virtual environment is activated you will see ‘(.venv)’ appear in front of every line in your terminal.



Using the virtual environment

Now that you have your virtual environment set up and ready, it is time to use it! Basically, you can continue your regular model/code development workflow from here. Start a new Python file and write some code, installing the necessary libraries as you go. Every library you install will be installed in your virtual environment instead of globally on your computer.
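If you ever want to double-check that your code really is running inside the virtual environment rather than against your global interpreter, a quick sanity check (a minimal sketch using only the standard library) looks like this:

```python
import sys

def in_virtualenv() -> bool:
    # Inside an activated venv, sys.prefix points at the environment,
    # while sys.base_prefix still points at the base interpreter.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```

If this prints False while you expected to be inside your venv, the activation script probably didn’t run in the terminal you launched Python from.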


Extracting your requirements.txt file

Once you’re happy with your code and you need to package it, you can easily create your requirements.txt file using pip. With a terminal at the path of your workspace you can run:

pip freeze > requirements.txt

This command will write all the currently installed packages to a new file called ‘requirements.txt’! If you include this file in your deployment package, you can be certain your code will have all the Python dependencies it needs.
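Each line in that file pins one package to the exact version pip found, in name==version form. As a small illustration (the package names below are just hypothetical examples of what pip freeze might emit), such a line can be split like this:

```python
def parse_requirement(line: str) -> tuple[str, str]:
    # pip freeze pins each package as "name==version"
    name, _, version = line.strip().partition("==")
    return name, version

# Hypothetical example lines from a frozen requirements.txt:
for req in ["requests==2.31.0", "numpy==1.26.4"]:
    print(parse_requirement(req))
```

This exact pinning is what makes the file reproducible: pip will install precisely those versions when it reads the file back with pip install -r requirements.txt.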


An important side note here is that any library you installed but didn’t actually need will also end up in the requirements.txt with this workflow. If you added a package but decided you’re not going to use it, it is best to uninstall that package before freezing your requirements. To do so, you just need to run pip uninstall <packagename>. This will make sure that there are no unnecessary dependencies in your requirements.txt.


All done!

That’s it! Those are the steps I follow to make sure my workspace stays clean and that I have a requirements.txt ready to go. However, it is still possible to run into dependency conflicts, since pip’s resolver doesn’t catch every incompatibility. Really diving into dependency hell and how to solve the additional issues that might arise is too much for this article. Instead, I can highly recommend “The nine circles of dependency hell” by Knewton, which goes into detail about solving dependency conflicts when they do arise.

If you have tried this workflow or if you have tips that can make it better, please let me know!

Anouk Dutrée

