"Failed to detect the name of this notebook, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable to enable code saving.\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mihaide\u001b[0m. Use \u001b[1m`wandb login --relogin`\u001b[0m to force relogin\n"
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to log in in a terminal so you don't have to repeat this step every time you run an experiment, you can run the following command and enter your API key after setting up your python environment:\n",
"\n",
"`wandb login`"
]
},
{
"data": {
"text/plain": [
"True"
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize a Run\n",
"\n",
"You can initialize a run in W&B, adding it to an existing project or creating a new one, as well as adding a config file to this run which tracks the hyperparameters of your experiment and will be stored in the database. You can do this by running the following cell:"
"If you want to log in in a terminal so you don't have to repeat this step every time you run an experiment, you can run the following command and enter your API key after setting up your python environment:\n",
"You can add everything you want to your config. It often makes sense to have at least the parameters in there that make this run unique, so you can repeat experiments later on with the exact same setup.\n",
"\n",
"`wandb login`"
"W&B initializes any run with a unique identifier, by default a random combination of an adjective and a noun. You can set the name yourself but be careful: If you use the same name twice, it will overwrite the older run."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize a Run\n",
"## Logging"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now tie everything together and start monitoring our experiments. We will start with a simple example that mimicks the training of a machine learning model. \n",
"\n",
"You can initialize a run in W&B, adding it to an existing project or creating a new one, as well as adding a config file to this run which tracks the hyperparameters of your experiment and will be stored in the database. You can do this by running the following cell:"
"To actually see what is happening, go to [wandb.ai](wandb.ai) and click on the project \"intro_wandb\". We will introduce a `sleep` command into the loop, so we can see the live updates (otherwise it would finish too quickly).\n",
"\n",
"We will also put the initialization of the run into the same cell. This will make sure that we start a new run each time we want to re-execute the \"training\"."
"Additionally, W&B also tracks the system on which you run your training on. Go to your project workspace. You will see a dropdown menu called `System` below your charts with loss and accuracy. Here you can see variables like your Disk Utilization, Number of CPU Threads etc as well as the GPU variables if you train on GPU. This can be very helpful e.g. to see if you are utilizing the GPU enough or if you can improve on your training there."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## W&B in Use"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now take your model that you have trained in `intro_pytorch_model_optimization.ipynb` and implement tracking your training with W&B. You can log any variable that you want, it often makes sense to log training and testing loss and accuracy separately so you can monitor all charts at the same time. Try doing different trainings with different hyperparameters and store the changes in your config file."
"You can also log your weights and biases with W&B (what a surprise there). Figure out how this works and implement it for your FashionMNIST model as well."
]
}
],
...
...
@@ -104,7 +177,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
"version": "3.9.13"
},
"orig_nbformat": 4
},
...
...
%% Cell type:markdown id: tags:
# Experiment Tracking Tutorial: Weights and Biases
Weights and Biases (W&B, [wandb.ai](https://wandb.ai)) is a machine learning platform for developers to quickly track experiments, visualize results, reproduce models, and a lot more. The general idea is to have one place where you can track your machine learning training runs live and store your models and configurations without having to manage a database yourself.
%% Cell type:markdown id: tags:
## Sign Up
You have to sign up first in order to use W&B. Go to [wandb.ai/site](https://wandb.ai/site) and create a **personal** account. Personal accounts can be used for as many experiments as you like and are free of charge.
Additionally, you have to install the W&B library as a Python package. You can do this by running the following command:
`pip install wandb`
We have already installed it for you here.
%% Cell type:markdown id: tags:
## Login
To use W&B in your experiments, you need to be logged in on the machine you are running your experiments on. You can do this from a Jupyter notebook or Python script by running the following cell and entering your API key, which you can find [here](https://wandb.ai/authorize):
%% Cell type:code id: tags:
``` python
import wandb
wandb.login()
```
%% Output
Failed to detect the name of this notebook, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable to enable code saving.
wandb: Currently logged in as: ihaide. Use `wandb login --relogin` to force relogin
True
%% Cell type:markdown id: tags:
If you want to log in from a terminal so you don't have to repeat this step every time you run an experiment, you can run the following command after setting up your Python environment and enter your API key:
`wandb login`
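If you saw the warning above about the notebook name not being detected, you can also set the `WANDB_NOTEBOOK_NAME` environment variable before logging in so that W&B can save your notebook code. A minimal sketch (the file name below is a placeholder for your actual notebook name):
``` python
import os

# Placeholder file name: replace with the name of the notebook you are actually running
os.environ["WANDB_NOTEBOOK_NAME"] = "intro_wandb.ipynb"
```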
%% Cell type:markdown id: tags:
## Initialize a Run
You can initialize a run in W&B, adding it to an existing project or creating a new one. You can also attach a config to the run, which tracks the hyperparameters of your experiment and is stored in the database. You can do this by running the following cell:
You can add everything you want to your config. It often makes sense to include at least the parameters that make this run unique, so you can repeat experiments later with the exact same setup.
W&B initializes every run with a unique identifier, by default a random combination of an adjective and a noun. You can set the name yourself, but be careful: if you use the same name twice, it will overwrite the older run.
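A minimal example of what such a cell could look like (the config values below are placeholders; use the hyperparameters of your own experiment):
%% Cell type:code id: tags:
``` python
import wandb

# Placeholder config: extend it with whatever parameters define your experiment
run = wandb.init(project="intro_wandb", config={"learning_rate": 0.01, "epochs": 10, "batch_size": 64})
```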
%% Cell type:markdown id: tags:
## Logging
%% Cell type:markdown id: tags:
We can now tie everything together and start monitoring our experiments. We will start with a simple example that mimics the training of a machine learning model.
To actually see what is happening, go to [wandb.ai](https://wandb.ai) and open the project "intro_wandb". We will introduce a `sleep` call into the loop so we can see the live updates (otherwise it would finish too quickly).
We will also put the initialization of the run into the same cell. This makes sure that we start a new run each time we re-execute the "training".
print(f"epoch {i}: accuracy = {acc}, loss = {loss}")
wandb.log({"accuracy":acc,"loss":loss})
time.sleep(2)
```
%% Cell type:markdown id: tags:
### System Monitoring
Additionally, W&B tracks the system your training runs on. Go to your project workspace: below the charts with loss and accuracy you will see a dropdown section called `System`. There you can see metrics such as disk utilization and the number of CPU threads, as well as GPU metrics if you train on a GPU. This can be very helpful, e.g. to check whether you are utilizing the GPU enough or whether there is room to improve your training.
%% Cell type:markdown id: tags:
## W&B in Use
%% Cell type:markdown id: tags:
Now take the model that you trained in `intro_pytorch_model_optimization.ipynb` and implement tracking of your training with W&B. You can log any variable you want; it often makes sense to log training and test loss and accuracy separately so you can monitor all charts at the same time. Try different training runs with different hyperparameters and store the changes in your config.
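As a rough sketch, logging train and test metrics under separate keys could look like this (the key names and parameters here are placeholders, not a fixed W&B schema):
``` python
import wandb

def log_epoch_metrics(train_loss, train_acc, test_loss, test_acc):
    """Log train and test metrics under separate keys so each gets its own chart."""
    wandb.log({
        "train_loss": train_loss,
        "train_acc": train_acc,
        "test_loss": test_loss,
        "test_acc": test_acc,
    })
```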
%% Cell type:code id: tags:
``` python
# Your code goes here
```
%% Cell type:markdown id: tags:
You can also log your model's weights and biases with W&B (what a surprise there). Figure out how this works and implement it for your FashionMNIST model as well.