Hello everyone, I hope you are all doing well.
I would like to ask for your help with training a simple deep learning network to segment an image.
I have, for example, 10 PNG images of a patient's left ventricle from MRI, and I have the contours for those images (also PNGs). The question is: how can I store the contours as ground truth for the images? I followed the MATLAB example below (triangle segmentation), but for my own data I don't know how to store the image contours as ground truth for the training images. If you have any ideas, please don't hesitate to share them. Thanks a lot.
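One common approach (a sketch, not the only way): since `pixelLabelDatastore` expects label images where each pixel value encodes a class, the contour PNGs can be converted into filled binary masks first. The folder names below (`data/contours`, `data/labels`) are placeholders, and this assumes each contour is a closed curve on a black background:

```matlab
% Convert contour PNGs into filled label masks (255 = ventricle, 0 = background).
contourDir = fullfile('data','contours');   % placeholder: your contour PNGs
labelDir   = fullfile('data','labels');     % placeholder: output label masks
if ~exist(labelDir,'dir'), mkdir(labelDir); end

files = dir(fullfile(contourDir,'*.png'));
for k = 1:numel(files)
    c = imread(fullfile(contourDir, files(k).name));
    if size(c,3) > 1
        c = rgb2gray(c);                    % collapse RGB contours to grayscale
    end
    bw   = imbinarize(c);                   % contour curve as a binary image
    mask = imfill(bw,'holes');              % fill the closed contour -> solid region
    imwrite(uint8(mask)*255, fullfile(labelDir, files(k).name));
end
```

The resulting masks can then be passed to `pixelLabelDatastore` with `labelIDs = [255 0]`, exactly as in the triangle example. If a contour is not perfectly closed, `imfill` will not fill it; `imclose` with a small structuring element beforehand may help in that case.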
I want to train this network to segment images into regions corresponding to their contours, as shown below:
[figure: training image | contour for the training image]
MATLAB example:
[figure: training image | training labels]
dataSetDir = fullfile(toolboxdir('vision'),'visiondata','triangleImages');
imageDir = fullfile(dataSetDir,'trainingImages');
labelDir = fullfile(dataSetDir,'trainingLabels');
imds = imageDatastore(imageDir);
classNames = ["triangle","background"];
labelIDs = [255 0];
pxds = pixelLabelDatastore(labelDir,classNames,labelIDs);
imageSize = [32 32];
numClasses = 2;
lgraph = unetLayers(imageSize, numClasses);
ds = pixelLabelImageDatastore(imds,pxds);
options = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-3, ...
    'MaxEpochs',50, ...
    'VerboseFrequency',10);
net = trainNetwork(ds,lgraph,options);
testImage = imread('triangleTest.jpg');
figure
imshow(testImage)
C = semanticseg(testImage,net);
B = labeloverlay(testImage,C);
figure
imshow(B)
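For the left-ventricle data, the same pipeline should work once the datastore lines point at your own folders and the labels are stored as filled masks (e.g. 255 inside the ventricle, 0 outside). A hedged sketch of just the lines that change; the directory names and image size here are assumptions that need to match your files:

```matlab
% Adapting the triangle example to the LV data (folder names are placeholders).
imageDir = fullfile('data','images');       % your 10 MRI PNGs
labelDir = fullfile('data','labels');       % filled contour masks (0/255 PNGs)
imds = imageDatastore(imageDir);
classNames = ["ventricle","background"];
labelIDs   = [255 0];                       % white = ventricle, black = background
pxds = pixelLabelDatastore(labelDir,classNames,labelIDs);
imageSize  = [256 256];                     % set to your actual slice size
numClasses = 2;
lgraph = unetLayers(imageSize, numClasses);
```

With only 10 images, heavy data augmentation (rotation, translation, flipping) is usually needed for the network to generalize at all.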