3D/4D Ultrasound System

Chaowei Tan, Jie Ren, Tian Cao, Dong C. Liu

Introduction

Traditional 2D ultrasound images offer clinical information about human tissue and blood-flow states in real time, but only in a 2D slice format. One disadvantage of 2D ultrasound imaging is that the true anatomy is lost in conventional exams, which can affect the accuracy of diagnostic results. 3D ultrasound, by contrast, allows doctors to view the anatomy in 3D in applications such as OB diagnosis and 3D power Doppler imaging, which are hard to obtain from conventional 2D scans. In general, 3D ultrasound imaging involves the following steps:

    1. Acquisition: acquire a sequence of 2D image frames.

    2. Reconstruction: construct canonical volume data from these frames.

    3. Volume rendering (VR): render the 3D image from the volume data.

    4. 3D editing: remove unwanted structures that block the region of interest.

Data Acquisition

Freehand Acquisition

In freehand acquisition, the most common way of scanning is for the operator to hold the probe and sweep it in the direction perpendicular to the imaging plane, acquiring the 2D images for the 3D volume data. This process is called linear scanning; fan scanning and rotational scanning are also used [1].

The freehand approach offers the advantages of arbitrary acquisition volumes, low cost, and compatibility with the full palette of existing 2D probes. However, when no position sensor is mounted on the probe, registration methods are needed to reconstruct the volume data.

Volume Probe Acquisition

A volume probe is bigger and has more internal space than a normal probe. It contains a conventional ultrasound transducer, and it can precisely sweep that transducer in the manner of fan scanning. With a volume probe, data acquisition is therefore easy to implement, because algorithm designers can obtain accurate position information for the sequence of 2D images in 3D coordinates. Images of a volume probe are shown below.

Ultrasound Volume Data Reconstruction

The reconstruction methods for the volume probe are relatively easy to implement, so only algorithms for freehand reconstruction are described here. In freehand methods, the probe movement in the axial and lateral directions, called "in-plane" motion, can be estimated with traditional image registration techniques (e.g., MSAD). The probe movement in the out-of-plane, elevational direction can be estimated from the degree of speckle decorrelation. One problem is that this theory holds only for fully developed speckle; when the method is applied directly to real tissue, large errors may occur. Gee et al. proposed an adaptive scheme that uses the isotropy of coherent scattering to correct the estimate of the elevational movement.

Sample Probe Motion Estimation Based on the Sum of Absolute Differences

  • Elevational Motion Estimation

We use the SAD technique instead of the normalized correlation coefficient (NCC) as the measure of decorrelation. The SAD of two W × H images a and b can be calculated as follows,

    SAD(a, b) = Σi=1..W Σj=1..H |aij − bij|

where aij is a pixel value of image a.

In order to reconstruct the volume image, a set of successive B-scan frames is captured. First, we put N successive frames into a group and divide each frame into n × n blocks. For a speckle block (that is, a block covered by a fully developed speckle pattern), we calculate b(i, j, k, m), the average SAD between blocks at the same in-plane position in frames that are separated by an interval of k frames, where i, j are the block indices in the x and y directions, k is the step-number offset, and m is the maximum step number. In this way we obtain a SAD value for each step number in the group, i.e., a SAD curve per block.
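As a concrete illustration, the per-block SAD curve can be computed as in the following Python/NumPy sketch. The function names and the choice to average over all frame pairs in the group separated by k frames are our assumptions; the original formulation (whose equation is not reproduced here) may differ in detail.

    import numpy as np

    def sad(a, b):
        # Sum of absolute differences of two equally sized image blocks.
        return np.sum(np.abs(a.astype(np.float64) - b.astype(np.float64)))

    def sad_curve(frames, i, j, block, max_step):
        # SAD-vs-step-number curve b(i, j, k, m) for the block at indices
        # (i, j); frames is a list of N successive B-scan images (H x W),
        # block is the block size in pixels, max_step is m.
        n = len(frames)
        ys, xs = j * block, i * block
        curve = []
        for k in range(1, max_step + 1):
            # Average the SAD over all frame pairs separated by k frames.
            vals = [sad(frames[f][ys:ys + block, xs:xs + block],
                        frames[f + k][ys:ys + block, xs:xs + block])
                    for f in range(n - k)]
            curve.append(np.mean(vals))
        return np.asarray(curve)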

A typical curve is shown in Figure 1.

This curve represents the speckle decorrelation rate. The SAD between the first frame and the following frames increases as the frame offset increases, and after several frames it approaches a constant. Assuming the sweep of the probe is smooth and in one direction only, we can use the elevational offset at the half-maximum (HM) of the curve to estimate the frame offset,

    dz(i, j) = HMZ(i, j) / StepNumz(i, j)

where dz(i, j) is the estimated elevational movement of block Bl(i, j). HMZ(i, j) is the offset distance at the half-maximum of the SAD curve of an ideal speckle region at the depth of row j; it can be obtained by a calibration sweep in a tissue-mimicking phantom. StepNumz(i, j) is the step number at which the SAD curve reaches its half-maximum in a measurement sweep.
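A minimal sketch of this estimate follows, assuming the plateau of the curve is approximated by its last value and StepNumz is taken as the first step at which the curve crosses the half-maximum; both are implementation choices not specified above.

    import numpy as np

    def half_max_step(curve):
        # First step number at which the SAD curve reaches half its plateau.
        half = 0.5 * curve[-1]              # plateau ~ last value of the curve
        above = np.nonzero(curve >= half)[0]
        return int(above[0]) + 1 if above.size else len(curve)

    def elevational_spacing(curve, hm_z):
        # dz = HMZ / StepNumz: calibrated half-maximum distance divided by
        # the measured step number at half-maximum.
        return hm_z / half_max_step(curve)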

  • In-plane Motion Estimation

To estimate the in-plane motion, we can use the MSAD technique. However, a traditional MSAD search only yields in-plane movement with an accuracy of one pixel, while in ultrasound imaging the probe movement between frames is usually less than one pixel. Therefore, sub-pixel estimation is needed to improve the accuracy of the motion estimation.

For each speckle block, the first step is a simple one-dimensional search in the x direction and in the y direction. The integer-pixel in-plane motion is

    (dx, dy) = (xMSAD − x, yMSAD − y)

where x, y is the original image position of the block and xMSAD, yMSAD is the location of the MSAD in the x and y directions.

In order to get a more accurate result, we fit a curve to the SAD values near the MSAD location in both the x and y directions, and take the minimum position of the fitted curve as the final position of the in-plane motion vector. Experiments show that the SAD curve is almost symmetric in both directions, so a simple symmetric triangle fit can be used to find the central position, as shown in Figure 2. In Figure 2, the two line segments have slopes of equal magnitude and opposite sign, so the minimum position δx given by the triangle estimation is

    δx = Psize (Y1 − Y3) / (2 (max(Y1, Y3) − Y2))

where Y1, Y2 and Y3 are the SAD values around the MSAD position, Psize is the pixel size of the ultrasound image, and δx is the interpolated sub-pixel offset. δy in the axial direction is calculated in the same way. The final in-plane motion is then

    (dx, dy) = (xMSAD − x + δx, yMSAD − y + δy)
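The closed form above follows from intersecting two lines of equal and opposite slope through the points (−1, Y1), (0, Y2), and (+1, Y3). A sketch, in which the degenerate-case handling is our own assumption:

    def triangle_subpixel(y1, y2, y3, psize):
        # Sub-pixel offset of the SAD minimum by symmetric triangle fitting.
        # y1, y2, y3 are the SAD values one pixel before, at, and one pixel
        # after the integer MSAD position; psize is the pixel size.
        slope = max(y1, y3) - y2            # common slope magnitude
        if slope <= 0:
            return 0.0                      # flat neighborhood: no refinement
        return psize * (y1 - y3) / (2.0 * slope)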

Adaptive SAD Curve Correction

Andrew H. Gee et al. developed an adaptive method to correct the estimated distance in regions of not fully developed speckle. We use their model and modify it to fit the SAD approach. The model is based on the following assumption: there is a relationship between elevational speckle decorrelation and lateral or axial speckle decorrelation. If coherent scattering affects the elevational decorrelation, the lateral and axial decorrelation will also be affected. So in not-fully-developed regions, the lateral or axial decorrelation curve can be used to correct the elevational decorrelation curve and improve the accuracy of the distance estimation.

In the adaptive curve correction method, for fully developed speckle the pixels of two neighboring B-scan frames can be represented as Ai and Bi, where i is the pixel index. For a coherent scattering region, the two images can be represented as Ai + eBi and Bi + eAi, where e is the coherent coefficient, ranging from 0 to 1. e = 0 means there is no coherent scattering and the pixels of the two images are Ai and Bi; if e = 1, the two images both become Ai + Bi.

In experiments on real tissue, the SAD curve may be quite different from that in a uniform tissue-mimicking phantom. Let ρcoherent be the SAD value from the SAD curve of real tissue and ρspeckle be the SAD value of the phantom. Since |(Ai + eBi) − (Bi + eAi)| = (1 − e)|Ai − Bi|, we get

    ρcoherent = (1 − e) ρspeckle

So the lateral-direction coherent coefficient el(k) can be obtained from the lateral SAD value ρcoherent_l(k) and a reference standard lateral SAD value ρstandard_l(k),

    el(k) = 1 − ρcoherent_l(k) / ρstandard_l(k)

where el(k) is the coherent coefficient at step number k. The axial-direction coherent coefficient ea(k) can be obtained in the same way. The axial and lateral coherent coefficients must be resized and resampled before being applied to the elevational direction. Here we use HMz / HMx and HMz / HMy as the resize ratios, where HMi is the offset distance at the half-maximum of the SAD curve in direction i. The resampled lateral and axial coherent coefficients are averaged and used in the elevational SAD curve correction:

    e(k) = (el(k) + ea(k)) / 2,    ρspeckle(k) = ρcoherent(k) / (1 − e(k))

where e(k) is the averaged coherent coefficient, ρcoherent(k) is the original SAD curve, and ρspeckle(k) is the corrected SAD value.
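The correction can be sketched as follows, assuming the lateral and axial coefficients have already been resampled onto the elevational step numbers using the HMz / HMx and HMz / HMy ratios; the helper names and the small epsilon guarding the division are ours.

    import numpy as np

    def coherent_coefficient(rho_measured, rho_standard):
        # From the model (Ai + e*Bi) vs. (Bi + e*Ai):
        # |(Ai + e*Bi) - (Bi + e*Ai)| = (1 - e) * |Ai - Bi|,
        # hence e(k) = 1 - rho_measured(k) / rho_standard(k).
        return np.clip(1.0 - rho_measured / rho_standard, 0.0, 1.0)

    def correct_elevational_curve(rho_elev, e_lat, e_ax):
        # Average the (resampled) lateral and axial coherent coefficients
        # and invert the (1 - e) attenuation of the elevational SAD curve.
        e = 0.5 * (e_lat + e_ax)
        return rho_elev / (1.0 - e + 1e-12)   # corrected rho_speckle(k)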

Volume Rendering of Ultrasound Volume Data

We implement the volume rendering algorithm based on the method described in GPU Gems [2]. In what follows, the basic steps of this volume rendering method are described (some of the descriptions, equations, and figures are cited from GPU Gems [2]).

Sampling Ways and Emission-Absorption Equation

Direct volume rendering methods generate images of a 3D volumetric data set without explicitly extracting geometric surfaces from the data (Levoy 1988). These techniques use an optical model to map data values to optical properties, such as color and opacity (Max 1995). During rendering, optical properties are accumulated along each viewing ray to form an image of the data.

When using the GPU to implement VR, the volume data is stored as a stack of 2D texture slices or as a single 3D texture object. Compared with the traditional ray-casting VR method, the GPU-based method creates a series of intermediate images by sampling the volume along all viewing rays and accumulating the resulting optical properties, as illustrated below.

For the emission-absorption model, the accumulated color and opacity are computed according to the following front-to-back compositing equations,

    Ĉi = Ĉi−1 + (1 − Âi−1) Ci
    Âi = Âi−1 + (1 − Âi−1) Ai

where Ci and Ai are the color and opacity assigned by the transfer function to the data value at sample i, and Ĉi and Âi are the values accumulated up to sample i. The opacity Ai approximates the absorption, and the opacity-weighted color Ci approximates the emission and the absorption along the ray segment between samples i and i + 1.
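Written as code, the accumulation along one viewing ray is a short loop. The following is a minimal sketch; the early-termination threshold is our own choice and not part of the original method.

    import numpy as np

    def composite_ray(colors, alphas):
        # Front-to-back emission-absorption compositing along one ray.
        # colors: (n, 3) opacity-weighted colors C_i; alphas: (n,) opacities A_i.
        acc_c, acc_a = np.zeros(3), 0.0
        for c, a in zip(colors, alphas):
            acc_c = acc_c + (1.0 - acc_a) * c   # C^_i = C^_{i-1} + (1 - A^_{i-1}) C_i
            acc_a = acc_a + (1.0 - acc_a) * a   # A^_i = A^_{i-1} + (1 - A^_{i-1}) A_i
            if acc_a > 0.99:                    # early ray termination
                break
        return acc_c, acc_a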

Texture Based Volume Rendering Using Direct3D

In general, the process of a texture-based volume rendering algorithm on the GPU can be divided into three stages: (1) Initialize, (2) Update, and (3) Draw. The Initialize stage is usually performed only once. The Update and Draw stages are executed whenever the application receives user input, for example when viewing or rendering parameters change.

However, some systems may not have a GPU powerful enough for a shader or CUDA implementation, such as the system shown below.

Therefore, based on the idea of texture-based volume rendering, we developed a VR method using Direct3D. We store the volume data as a 3D texture, treat the 3D texture as a bounding box, and use the CPU to calculate the vertices of the 2D sampling slices, as illustrated below.

The alpha-blending function in Direct3D is then used to implement the emission-absorption process and obtain the final 3D image. Because the way alpha blending composites textures is similar to the emission-absorption model, this VR method works well in practical applications.
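The geometric core of the CPU-side work is intersecting each sampling plane with the volume's bounding box to obtain slice vertices. The following sketch computes this intersection polygon for a unit cube; it is an illustration under our own assumptions, not the system's code, and the actual system would scale to the volume's extents and pass the vertices to Direct3D.

    import numpy as np
    from itertools import product

    # The 8 corners of a unit cube and its 12 edges as corner-index pairs.
    _CORNERS = np.array(list(product((0.0, 1.0), repeat=3)))
    _EDGES = [(a, b) for a in range(8) for b in range(a + 1, 8)
              if np.sum(np.abs(_CORNERS[a] - _CORNERS[b])) == 1]

    def slice_polygon(normal, dist):
        # Vertices of the intersection of the plane {p : n.p = dist} with
        # the unit cube, ordered around the polygon (empty if none).
        n = np.asarray(normal, dtype=np.float64)
        pts = []
        for a, b in _EDGES:
            pa, pb = _CORNERS[a], _CORNERS[b]
            da, db = n @ pa - dist, n @ pb - dist
            if da * db < 0:                      # edge crosses the plane
                t = da / (da - db)
                pts.append(pa + t * (pb - pa))
        if len(pts) < 3:
            return np.empty((0, 3))
        pts = np.asarray(pts)
        center = pts.mean(axis=0)
        # Order vertices by angle around the slice center so the polygon
        # can be drawn as a triangle fan.
        u = pts[0] - center
        u /= np.linalg.norm(u)
        v = np.cross(n / np.linalg.norm(n), u)
        ang = np.arctan2((pts - center) @ v, (pts - center) @ u)
        return pts[np.argsort(ang)]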

This method does not depend on high-end graphics hardware and achieves real-time VR. It also has some disadvantages: for example, because it uses the fixed rendering pipeline of D3D, it might not be flexible enough to accommodate other complex algorithms in VR.

3D Editing

Arbitrary Projected Slices Editing Method for Volume Data

In order to meet the real-time processing requirement, we implement only the arbitrary projected slices editing method for volume data. In this algorithm, an arbitrary slice with an axial size of sa and a lateral size of sl can be defined in a slice space (u, v, w) with origin (x0, y0, z0) relative to the normal volume coordinates. The normal vector of this slice can be obtained from the given lateral and axial direction vectors [lx, ly, lz] and [ax, ay, az]. The normal vector (i.e., along the w axis) is their cross product,

    [lx, ly, lz] × [ax, ay, az] = [ly az − lz ay, lz ax − lx az, lx ay − ly ax]

which also gives the direction numbers of the w axis relative to the x, y, z axes. Similarly, the direction numbers of the u and v axes relative to the x, y, z axes are the lateral and axial direction vectors [lx, ly, lz] and [ax, ay, az] themselves.

The direction cosines (li, mi, ni) of the u, v, w axes relative to the x, y, z axes are computed by normalizing the corresponding direction numbers, e.g., (lu, mu, nu) = [lx, ly, lz] / ||[lx, ly, lz]||.

The transformation from the slice coordinates (u, v, w) to the normal volume coordinates (x, y, z) is now

    x = x0 + lu u + lv v + lw w
    y = y0 + mu u + mv v + mw w
    z = z0 + nu u + nv v + nw w

From the above equation, the pixel values on the (u, v) slice can be estimated from the raw volume data by applying this transformation. Because a transformed voxel may not lie on a grid point, its value should be approximated by tri-linear interpolation. In volume editing, the user specifies an arbitrary slice and draws several ROIs to be removed. The ROI pixels on the initial (u, v) plane are transformed to (x, y, z) space and defined as the "base voxels." Starting from these base voxels, we can generate new voxels on the next slice along the normal vector of the slice plane. The new point (xk+1, yk+1, zk+1) corresponding to slice k + 1 is derived by

    (xk+1, yk+1, zk+1) = (xk, yk, zk) + d (lw, mw, nw)

where d represents the distance between slice k and slice k + 1. With these equations, the coordinate transformation is required only once, to define the base voxels; thereafter the voxels are simply stepped along (lw, mw, nw).
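A sketch of the axis construction and the base-voxel propagation follows; the array layout and helper names are our assumptions.

    import numpy as np

    def slice_axes(lateral, axial):
        # Direction cosines of the u, v, w axes: the normalized lateral and
        # axial direction vectors and their cross product (the slice normal).
        u = np.asarray(lateral, dtype=np.float64)
        v = np.asarray(axial, dtype=np.float64)
        w = np.cross(u, v)
        return (u / np.linalg.norm(u),
                v / np.linalg.norm(v),
                w / np.linalg.norm(w))

    def propagate_base_voxels(base_uv, origin, lateral, axial, n_slices, d):
        # Transform ROI pixels (m, 2) on the initial (u, v) slice into volume
        # coordinates, then step them along the slice normal:
        # (x_{k+1}, y_{k+1}, z_{k+1}) = (x_k, y_k, z_k) + d * (l_w, m_w, n_w).
        base_uv = np.asarray(base_uv, dtype=np.float64)
        cu, cv, cw = slice_axes(lateral, axial)
        base = (np.asarray(origin, dtype=np.float64)
                + base_uv[:, :1] * cu + base_uv[:, 1:2] * cv)
        return [base + k * d * cw for k in range(n_slices)]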

4D Ultrasound Implementation

4D ultrasound is one of the most important advanced functions of a color ultrasound system at present. Its implementation is based on the methods discussed above, but it imposes real-time requirements on both data acquisition and 3D image rendering, so improving the processing speed is the key to 4D. Since it reuses the basic methods of 3D ultrasound imaging, the details of 4D are not described here.

Experiment and Results

The above methods have been integrated into a practical ultrasound system. Some results are shown below. Because 4D results are dynamic and therefore not convenient to display here, only results of the 3D applications are shown:

    • A fetus' face from a phantom using Freehand 3D

    • A fetus' face from a phantom using Freehand 3D with speckle reduction

    • A fetus' face using Volume Probe

    • A fetus' face using Volume Probe with speckle reduction

    • A fetus' arm from in-vivo data using Volume Probe

Conclusion

    1. We present a series of algorithms for a practical 3D/4D ultrasound system.

    2. The freehand and volume probe reconstruction algorithms can construct volume data of acceptable quality, so they can serve diagnosis and other practical applications.

    3. The texture-based volume rendering method using D3D offers good, real-time rendering on systems with limited hardware performance.

    4. The 3D editing method can remove the unwanted skin and other tissues that block the region of interest, making observation more convenient for the user.

References

[1] A. Fenster and D. B. Downey, "3-D ultrasound imaging: a review," IEEE Engineering in Medicine and Biology Magazine, vol. 15, pp. 41-51, December 1996.

[2] M. Ikits, J. Kniss, A. Lefohn, and C. Hansen, "Volume Rendering Techniques," in R. Fernando (ed.), GPU Gems: Programming Techniques, Tips, and Tricks for Real-Time Graphics, ch. 39, Addison-Wesley, 2004.