tensorboardX visualization (PyTorch)
1. Using tensorboardX
1. Installation via pip

```
pip install tensorboard
pip install tensorflow
pip install tensorboardX
```
2. Use SummaryWriter in tensorboardX
The following describes the various data-recording methods of a SummaryWriter instance in detail and provides corresponding examples for reference.
(1) Three methods of instantiating SummaryWriter
```python
from tensorboardX import SummaryWriter

# Creates writer1 object.
# The log will be saved in 'runs/exp'
writer1 = SummaryWriter('runs/exp')

# Creates writer2 object with auto generated file name
# The log directory will be something like 'runs/Aug20-17-20-33'
writer2 = SummaryWriter()

# Creates writer3 object with auto generated file name, the comment will be appended to the filename.
# The log directory will be something like 'runs/Aug20-17-20-33-resnet'
writer3 = SummaryWriter(comment='resnet')
```
The above shows three ways to initialize a SummaryWriter:
1. Provide a path that will be used to save the log.
2. Pass no parameters. By default, a `runs/<datetime>` path is used to save the log.
3. Provide a comment parameter; a `runs/<datetime>-<comment>` path is used to save the log.
(2) Use various add methods to record data
Use the add_scalar method to record numeric constants.

add_scalar(tag, scalar_value, global_step=None, walltime=None)

Parameter meanings:
tag (string): data name. Data with different names are displayed as different curves
scalar_value (float): the numeric value to record
global_step (int, optional): training step
walltime (float, optional): time when the event occurred. Defaults to time.time()
Note that scalar_value must be a float. If it is a PyTorch scalar tensor, call its item() method to get the value. We usually use the add_scalar method to record how loss, accuracy, learning rate, and other values change during training, so that the training process can be monitored intuitively.
(3) View the visualization results in the browser (launch TensorBoard from the command line)
Generally speaking, we create a SummaryWriter with a different path for each experiment (also called a run), such as runs/exp1 and runs/exp2. We then call the various add_something methods of the SummaryWriter instance to write different types of data to the log. To view and visualize these data in the browser, just start TensorBoard on the command line: tensorboard --logdir=Path
The Path can be the path of a single run, such as runs/exp generated by writer1 above, or the parent directory of multiple runs. For example, there may be many subfolders under runs/, each representing one experiment. Running tensorboard --logdir=runs/ makes it easy to compare the data from different experiments under runs/ side by side in the TensorBoard interface.
3. Usage in PyTorch -- saving the model structure graph (add_graph).
```python
import torch
import torch.nn as nn
from tensorboardX import SummaryWriter

class model(nn.Module):
    .........

input = torch.rand(13, 1, 28, 28)  # suppose the input is 13 images of size 1 * 28 * 28
model = model()
with SummaryWriter(log_dir='logs', comment='Net') as w:
    w.add_graph(model, (input,))
```
Then a logs folder will appear next to this code file. Open the Anaconda Prompt, navigate there, and enter a command of the form:
```
tensorboard --logdir="absolute path of the logs folder"
```

For example:

```
tensorboard --logdir="D:\Pytorch Introduction to deep learning\biLSTM_attn-master\logs"
```
Copy the link that appears and open it in the browser:
4. Visualizing the loss function
```python
writer = SummaryWriter(log_dir='logs', comment='Linear')
for epoch in range(1, epochs + 1):
    epoch_start_time = time.time()
    loss = train()
    loss, corrects, acc, size = evaluate()
    writer.add_scalar("Training loss value", loss, epoch)
    writer.add_scalar("Classification accuracy", acc, epoch)
```
Enter the following in cmd, then open the printed link in the browser:

```
tensorboard --logdir="D:\Pytorch Introduction to deep learning\biLSTM_attn-master\logs"
```