Nguyen Lab Wiki

Dan Midgett, 8/13/13

In DVC (Digital Volume Correlation), the data are graphical: a set of 2D images (referred to as a stack from here on) that can be reconstructed into a representative volume at a particular time of acquisition.

Guidelines for determining pattern quality:

  1. The image should contain randomly distributed (not regular) patterning. This can take the form of speckles that are either brighter or darker than the rest of the image.
  2. Ideally, since strain results should be continuous, the pattern should also contain background texture. We've found that although a completely white or black background with representative speckles can be analyzed, some background texturing results in better correlation because it gives the representative 3D subsets more matching information.
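As a rough way to compare candidate patterns, one can score local contrast, which is the idea behind the guideline that background texture helps correlation. Below is a minimal Python sketch (the function name, window size, and metric are illustrative assumptions of mine, not part of the lab's MATLAB code):

```python
import numpy as np

def texture_score(img, win=8):
    """Rough pattern-quality proxy: mean local standard deviation.

    A speckled image with background texture has higher local contrast
    than a flat background, so a higher score loosely suggests more
    matching information for correlation subsets. (Heuristic only.)
    """
    h, w = img.shape
    h, w = h - h % win, w - w % win            # trim to whole windows
    tiles = img[:h, :w].reshape(h // win, win, w // win, win)
    return float(tiles.std(axis=(1, 3)).mean())

rng = np.random.default_rng(0)
flat = np.zeros((64, 64))        # featureless background scores 0
speckled = rng.random((64, 64))  # random speckle texture scores higher
```

A side-by-side score of a candidate stack and a known-good stack gives a quick sanity check before committing to a full correlation run.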

Processing:

We currently use an algorithm developed by Jacob Notbohm at Caltech. The core code is closed-source, meaning it cannot be altered, but we use it to validate our stacks and to check whether sets of images are viable for long-term DVC analysis. Eventually this lab will develop its own code. If you need to do DVC analysis, I have the files on my laptop. See the end of this article for notes on usage.

Here is the current process for analyzing images:

  1. Create a folder with your z-stack images. They should be named so that each stack has a unique text descriptor and each image within the stack is designated by a number at the end, in the form 'name'*.tif, where 'name' can be anything and * is a number.
  2. Drop the file convert_to_mat.m into the same folder. Open it and set the intended subset size, number of images per stack, name of input file, name of output file, and the extension type. Run the program and it will automatically create a .mat file with the specifications required for processing in run_dvc.m. Do this for all of your z-stacks, changing the ending number so you get a series of stacks of the form 'stack'*.mat. The first one should be designated as '1'.
  3. Move the series of .mat files into the same folder as run_dvc.m and set the subset size, stack name, and sub-volume to analyze. Run it and, if all goes well, it will compile an output .mat database containing x, y, z, u, v, w; if the process completes, the correlation succeeded. You can inspect the output manually for clear outliers or NaNs. If there are entire columns of NaNs (such as border subsets), you might want to remove them before processing, but you don't need to.
  4. Open the .mat database to make the variables available to MATLAB, then open process_results.m. Set a few processing parameters, such as factor, the pixel-to-length conversion factor (so results are in real distances). If you choose, you can uncomment and run a filtering algorithm that detects outliers and replaces them with NaNs to mark them as uncorrelated. This can be useful for examining the results without the influence of uncorrelated subsets; it records the number of outliers discovered. Note that the decorrelation criteria depend on the type of deformation and may need to change with your motion and subset size. Run process_results.m.
  5. The program should export strains, displacements, plots, and three summary files. Totals.txt contains the average strains in each direction and the average displacement over all points (a total average). Average.txt contains the displacements averaged at each layer. LocalStrains.txt contains all the strain data for every matched point (the number is determined by the spacing set in run_dvc.m). The plots show strain and displacement for the various layers.
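The naming convention in step 1 matters because the slices must be assembled in acquisition order, and a plain alphabetical sort puts, e.g., name10.tif before name2.tif. A hypothetical Python sketch of sorting 'name'*.tif filenames by their trailing number (the function name and example filenames are mine):

```python
import re

def order_stack(filenames, name):
    """Sort 'name*.tif' slices by their trailing number so the stack
    is reconstructed in acquisition order. An alphabetical sort would
    put 'name10.tif' before 'name2.tif', scrambling the volume."""
    pat = re.compile(re.escape(name) + r"(\d+)\.tif$")
    numbered = []
    for f in filenames:
        m = pat.match(f)
        if m:                       # skip files that don't fit the pattern
            numbered.append((int(m.group(1)), f))
    return [f for _, f in sorted(numbered)]

files = ["gel10.tif", "gel2.tif", "gel1.tif", "notes.txt"]
ordered = order_stack(files, "gel")   # gel1, gel2, gel10
```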

Supporting files:

  • Plot_Displacement_Linear.m - allows you to calculate a theoretical displacement field and plot it against results.
  • Resize_Stack.m - allows you to resize/move an entire stack to virtually impose a deformation field. It then crops the image back to the original size. Note that if you are stretching, this removes some of the area included in the reference, so specify the surviving sub-volume when using run_dvc.
  • convert_to_GS.m - allows you to convert a series of color images, or images of another format, to an equivalent grayscale form. Alternatively, you can copy the conversion part of the code into convert_to_mat.m and convert while storing into .mat form, which is usually better than converting all the images separately.
  • Sharpen_Image_Set.m - sharpens all images by a specified amount. It first lets you compare sharpened and original images to find the desired amount, then converts the entire stack on command.
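The idea behind Resize_Stack.m, imposing a virtual deformation and cropping back to the original size, can be sketched as follows. This is an illustrative Python/numpy version using nearest-neighbour resampling, not the lab's MATLAB implementation; it shows why stretching pushes border material out of the cropped view, so the surviving sub-volume must then be specified in run_dvc:

```python
import numpy as np

def stretch_and_crop(stack, factor):
    """Virtually impose a uniform in-plane stretch on a z-stack
    (nearest-neighbour resampling about the image centre), then keep
    only the original frame size, mimicking the Resize_Stack.m idea.
    Material near the borders leaves the cropped view when factor > 1."""
    nz, ny, nx = stack.shape
    yy = (np.arange(ny) - ny / 2) / factor + ny / 2   # inverse map: out -> in
    xx = (np.arange(nx) - nx / 2) / factor + nx / 2
    yy = np.clip(np.round(yy).astype(int), 0, ny - 1)
    xx = np.clip(np.round(xx).astype(int), 0, nx - 1)
    return stack[:, yy][:, :, xx]

stack = np.zeros((3, 8, 8))
stack[:, 4, 4] = 1.0                 # a single bright speckle off-centre
out = stretch_and_crop(stack, 2.0)   # speckle smears into a larger blob
```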

Instructions for Running DVC Code

Jacob Notbohm, California Institute of Technology, 2012

Note: The theory behind this code is published in C. Franck, S. Hong, S.A. Maskarinec, D. A. Tirrell and G. Ravichandran, Three-dimensional full-field measurements of large deformations in soft materials using confocal microscopy and digital volume correlation, Experimental Mechanics, 47, 427-438 (2007). This version of the code does not include the “stretch correlation,” i.e., it is assumed that the subsets only translate; they do not stretch.

The code has 2 files. run_dvc.m loads files, calls the actual DVC code, and saves the results. The actual DVC code is dvc_full.p. Since the documentation in the p-file is hidden, it is copied here:

  % [u,v,w] = dvc_full(m0,m1,x0,y0,z0,ua,va,wa,w0,inc)
  %
  % This function calls the correlation code used in the DVC.
  %
  % Developed by Soonsung Hong and Christian Franck
  % California Institute of Technology, 2006-2007
  % Updated by Jacob Notbohm
  % California Institute of Technology, 2010
  %
  % INPUTS
  % m0 = reference stack used in correlation
  % m1 = current stack used in correlation
  % x0, y0, and z0 are a set of gridpoints on which to compute displacements
  % ua, va, and wa are guesses for translation vector (often just zero)
  % w0 is the subset size
  % inc==1 for incremental comparison; inc==0 for cumulative comparison
  %
  % OUTPUTS
  % u, v, and w are the displacements at the gridpoints specified by x0,y0,z0

To run the DVC, modify the run_dvc.m file to suit your needs. The following notes may be useful:
  • run_dvc is written assuming that each image stack is saved in a separate mat file called ‘stack*.mat’ where * is a number corresponding to the stack number. In the mat files, the only variable is the image stack which is called ‘dec_img.’
  • The code can be run either cumulatively (where the first image stack is the reference for all correlations) or incrementally (where the reference image for each correlation is the previous image). To run cumulatively, set inc=0; to run incrementally, set inc=1.
  • You may have to iterate to determine the best subset size (called d0 in run_dvc) and spacing (called w0 in run_dvc). Note that decreasing the subset spacing by a factor of 2 increases the computation time by a factor of 2^3 = 8.
  • run_dvc currently calls dvc_full twice (i.e., 2 iterations). While we did not notice improvement with additional iterations, one could easily add an extra iteration.
  • Two example stacks of fluorescent particles scanned with a confocal microscope are included (stack02 is a rigid body translation of stack01). The stacks are 512x512x201 voxels with 32 voxels of zero-padding around the edges.
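Because the stretch correlation is omitted, each subset match reduces to finding the translation that maximizes the cross-correlation between the reference and current sub-volumes. Below is a conceptual Python/numpy sketch of that single step using FFT-based cross-correlation; it is illustrative only, since the actual dvc_full.p implementation is hidden and certainly more elaborate (subpixel refinement, iteration, etc.):

```python
import numpy as np

def subset_translation(ref, cur):
    """Estimate the integer rigid translation between two 3D subsets by
    locating the peak of the circular cross-correlation, computed via
    FFTs. This is the basic operation a translation-only DVC applies
    subset by subset over the grid of measurement points."""
    xc = np.fft.ifftn(np.fft.fftn(ref).conj() * np.fft.fftn(cur)).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # wrap peaks past the midpoint back to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xc.shape))

rng = np.random.default_rng(1)
ref = rng.random((32, 32, 32))                        # synthetic speckle volume
cur = np.roll(ref, shift=(2, -3, 1), axis=(0, 1, 2))  # known rigid translation
shift = subset_translation(ref, cur)                  # recovers (2, -3, 1)
```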

These are the instructions given by Jacob Notbohm; I will add the following:

  1. Data should be processed as a general set of stacks, representing the same volume in space over time. There must be at least 2 stacks: one for reference and one for analysis.
  2. A .mat file is a database structure used by MATLAB to store variables and other parameters. Our .mat files contain only one variable, which we call dec_img, but any name can be used as long as run_dvc.m is altered to match the new name.
  3. I have written supporting files for pre-processing image stacks into the .mat form needed for Jacob's code and for post-processing his exported displacement matrices into strains and averages. These are fairly well-commented and customizable to the individual file forms and sizes needed.
  4. You can use any image type supported by MATLAB, but note that depending on the type you may get multiple channels. Jacob's code assumes a single intensity channel, which is essentially a grayscale or BW representation. This is usually fine, with no loss of information, provided that only one light channel was collected by the confocal or imaging process. Images with more than one color will lose information, but can be converted using MATLAB's conversion functions such as rgb2gray(). To avoid conversion with confocal results, export your confocal z-stacks as raw-data, single-channel .tif files. The confocal will automatically save them as single-intensity images named with ascending numbers, as used by my programs. A multi-page .tif file will work as well.
  5. Note that all information must be contained in all the stacks. If part of the reference stack deforms out of view, this can cause decorrelation. I've noticed that this correlation failure spreads beyond the original region that moved out of view, so avoid it by picking a reference volume that appears in all deformed volumes. If you do have this problem, note that I have altered Jacob's run_dvc.m file to let you analyze only a sub-volume of the original reference stack, excluding data that cannot be correlated. Note that the program will only analyze an odd number of subsets along each dimension (I'm not sure why).
  6. As he states, finding the best subset size and spacing may be a process of trial and error. A multiple of 16 is helpful, as it is usually also a factor of common image sizes. Note that a subset size below 16 may not provide enough information for quality matching, but the optimum size depends on the particular images, noise, etc. The code analyzes cuboid subsets, which means that if the stack contains fewer images than your stated subset size, the code returns blank matrices; this is correct, since the z-dimension does not have that many voxels of depth and no subset of that size can be analyzed, but the code does not tell you this, it just returns null values. To avoid this, I've written convert_to_mat.m to automatically use your specified subset size to zero-buffer in the z-dimension until the stack reaches the appropriate depth.
  7. If you get NaN values in your displacement matrices, this is fine. It means either that you have a perfect match (identical images, according to Jacob) or that the subset was uncorrelatable (my observation).
  8. You should always zero-buffer your images for better behavior; note that this is already built into my .mat image compiler. Just specify the intended subset size, number of pictures, etc., and the program will zero-buffer until the .mat stack is an exact multiple of the subset size. If a dimension is already an exact multiple, the program adds an extra subset's worth of buffering.
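The zero-buffering rule in note 8 can be sketched as follows. This is an illustrative Python version of the behavior described above, not the lab's convert_to_mat.m itself; the function name, and padding every dimension rather than only z, are my assumptions:

```python
import numpy as np

def pad_to_multiple(stack, w0):
    """Zero-buffer a stack so every dimension is an exact multiple of
    the subset size w0. Following the note above, a dimension that is
    already an exact multiple still gets one extra subset of padding."""
    padded_shape = []
    for s in stack.shape:
        rem = s % w0
        pad = (w0 - rem) if rem else w0   # extra subset if already exact
        padded_shape.append(s + pad)
    out = np.zeros(padded_shape, dtype=stack.dtype)
    out[tuple(slice(0, s) for s in stack.shape)] = stack  # data in the corner
    return out

stack = np.ones((10, 32, 33))
out = pad_to_multiple(stack, 16)
# dims: 10 -> 16, 32 -> 48 (already exact, so one extra subset), 33 -> 48
```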

Attached files:

  • run_dvc.m
  • dvc_full.p
  • process_results.m
  • resize_stack.m
  • plot_displacement_linear.m
  • convert_to_gs.m
  • convert_to_mat.m
  • resize_stack_1_.m

DokuWiki CC Attribution-Share Alike 4.0 International