# Examples

## Image Segmentation Training

The `train_segmentation.py` script demonstrates how to train a UNet model for image segmentation tasks. This example showcases DeepLib's capabilities for handling semantic segmentation problems, with various customization options.
### Prerequisites

- PyTorch and torchvision installed
- A dataset organized with the following structure (a loader sketch follows the tree):

```
data_root/
├── images/
│   ├── train/
│   └── val/
├── masks/
│   ├── train/
│   └── val/
```
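Each image in `images/<split>/` presumably pairs with a same-named mask in `masks/<split>/`. The sketch below shows one way to load such a layout with a plain PyTorch `Dataset`; the class name and the joint transform hook are illustrative, not part of DeepLib's API.

```python
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class SegmentationFolder(Dataset):
    """Pairs images/<split>/<name> with masks/<split>/<name> (hypothetical helper)."""

    def __init__(self, data_root, split="train", transform=None):
        self.image_paths = sorted(Path(data_root, "images", split).iterdir())
        self.mask_dir = Path(data_root, "masks", split)
        self.transform = transform  # joint transform applied to (image, mask)

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image_path = self.image_paths[idx]
        image = Image.open(image_path).convert("RGB")
        mask = Image.open(self.mask_dir / image_path.name)  # per-pixel class indices
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        return image, mask
```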
### Available Options

| Option | Description | Default |
|---|---|---|
| `--data_root` | Root directory containing the dataset | Required |
| | Directory name containing images | `"images"` |
| | Directory name containing masks | `"masks"` |
| `--num_classes` | Number of segmentation classes | Required |
| | Number of training epochs | 50 |
| `--batch_size` | Batch size for training | 64 |
| `--learning_rate` | Learning rate | 1e-4 |
| `--input_size` | Input image size (pixels) | 192 |
| | Index to ignore in loss calculation | 255 |
| | Dropout probability | 0.1 |
| | Device to use (`cuda`, `mps`, or `cpu`) | Best available |
| | Metric to monitor for early stopping | `"iou"` |
| `--loss` | Loss function | `"dice"` |
| `--logger` | Logger to use | `"tensorboard"` |
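The script's exact argument parser is not reproduced here, but wiring these options up with `argparse` would look roughly as follows; this is a sketch, assuming a standard argparse CLI. Only flags that also appear in the usage examples below are shown, with defaults taken from the table.

```python
import argparse

parser = argparse.ArgumentParser(description="Train a UNet for image segmentation")
parser.add_argument("--data_root", required=True,
                    help="Root directory containing the dataset")
parser.add_argument("--num_classes", type=int, required=True,
                    help="Number of segmentation classes")
parser.add_argument("--batch_size", type=int, default=64)
parser.add_argument("--learning_rate", type=float, default=1e-4)
parser.add_argument("--input_size", type=int, default=192)
parser.add_argument("--loss", default="dice",
                    choices=["ce", "dice", "wce", "jaccard", "focal"])
parser.add_argument("--logger", default="tensorboard",
                    choices=["tensorboard", "mlflow", "wandb", "none"])
args = parser.parse_args()
```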
### Loss Functions

The example supports multiple loss functions:

- `ce`: Cross Entropy Loss
- `dice`: Dice Loss
- `wce`: Weighted Cross Entropy Loss
- `jaccard`: IoU (Jaccard) Loss
- `focal`: Focal Loss
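For reference, the default `dice` option minimizes one minus the mean per-class Dice coefficient. The sketch below is a minimal multi-class soft Dice loss in plain PyTorch, not DeepLib's implementation; it also honors an `ignore_index`, mirroring the option in the table above.

```python
import torch
import torch.nn.functional as F


def soft_dice_loss(logits, targets, ignore_index=255, eps=1e-6):
    """logits: (N, C, H, W) raw scores; targets: (N, H, W) long class indices."""
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    valid = (targets != ignore_index).unsqueeze(1)                 # (N, 1, H, W)
    safe = torch.where(valid.squeeze(1), targets, torch.zeros_like(targets))
    one_hot = F.one_hot(safe, num_classes).permute(0, 3, 1, 2).float()
    probs, one_hot = probs * valid, one_hot * valid                # drop ignored pixels
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice_per_class = (2 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice_per_class.mean()


# Toy call: 2 samples, 3 classes, 64x64 predictions.
loss = soft_dice_loss(torch.randn(2, 3, 64, 64), torch.randint(0, 3, (2, 64, 64)))
```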
### Logging Options

Choose from multiple logging backends:

- `tensorboard`: TensorBoard logging (default)
- `mlflow`: MLflow logging
- `wandb`: Weights & Biases logging
- `none`: no logging
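With the default `tensorboard` backend, training metrics can be inspected with `tensorboard --logdir <dir>`. As a rough illustration of what scalar logging amounts to (the log directory, metric names, and values below are made up), PyTorch's built-in `SummaryWriter` can be driven directly:

```python
from torch.utils.tensorboard import SummaryWriter  # requires the tensorboard package

writer = SummaryWriter(log_dir="runs/segmentation")  # illustrative path
for epoch, (train_loss, val_iou) in enumerate([(0.62, 0.41), (0.48, 0.55)]):
    writer.add_scalar("loss/train", train_loss, epoch)  # tag, value, step
    writer.add_scalar("iou/val", val_iou, epoch)
writer.close()
```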
### Example Usage

Basic usage:

```bash
python examples/train_segmentation.py --data_root ./data/segmentation --num_classes 3
```

Advanced usage with custom parameters:

```bash
python examples/train_segmentation.py \
    --data_root ./data/segmentation \
    --num_classes 3 \
    --batch_size 32 \
    --learning_rate 1e-3 \
    --input_size 256 \
    --loss focal \
    --logger wandb
```
### Features

- Automatic device selection (CUDA, MPS, or CPU; see the sketch after this list)
- Multiple loss functions
- Various logging backends
- Learning rate scheduling
- Data augmentation
- Early stopping based on monitored metrics
- Model checkpointing
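A common pattern for "best available" device selection looks like the sketch below; this is an assumption about the approach, not code copied from the script.

```python
import torch


def best_device() -> torch.device:
    """Pick CUDA if present, then Apple's MPS, then fall back to CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")


model_device = best_device()  # e.g. device("cuda") on a GPU machine
```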