Tuesday, May 6, 2014

New Book in Indian Market

Hello dear friends. Here is a good new book on the Big Data side with MariaDB; if you like, give it a try. This is not a promotion, just sharing a new book.

Thursday, January 2, 2014

Big Data with UAP

Everyone is learning Big Data today, but many are not able to understand and classify its real methods and uses. To help with this, I would like to introduce the new book "Getting Started with Greenplum for Big Data Analytics" (http://bit.ly/HYOwrW). It gives a good practical idea of the Greenplum Unified Analytics Platform (UAP) with Hadoop and clustering of real data. The platform provides all kinds of analysis and their roles in different paradigms, with decision and complexity trees.

Friday, December 27, 2013

Fusion of Multi-slice CT scan Images

Problem:- In pixel-level fusion of CT scan images, the main problem is the multiple re-projection and back-projection operations required during iterative image reconstruction. Sinogram restoration algorithms often suffer from noticeable resolution loss, especially in the case of constant noise variance. The goal of pixel-level image fusion is to combine the visual information contained in multiple source images into a single informative fused image without introducing distortion or losing information, aligning data of different wavelengths and frequencies acquired on different hardware so that different algorithms can be applied for accurate fusion in RGB.
The main emphasis of the latest developments in medical imaging is to develop more reliable and capable algorithms that can be used in real-time diagnosis of tumours. A brain tumour is caused by uncontrolled growth of a mass of tissue, and can be fatal in both children and adults. Depending on its origin and growth, a brain tumour can be classified into two types: 1) a primary brain tumour develops at the original site of the tumour; 2) a secondary brain tumour is a cancer that has spread to the brain from other parts of the body. The detection of brain tissue and tumours in MR and CT scan images has been an active research area. Segmenting and detecting the specific regions of the brain containing tumour cells is considered the fundamental problem in image analysis related to tumour detection. Many segmentation algorithms are implemented based on edge detection on grey-scale images.
The contrast pyramid fusion method loses too much source information to obtain a clear, subjective image; the ratio pyramid method produces a lot of inaccurate information in the fused version; and the morphological pyramid method creates a large number of artifacts that do not exist in the original source images. Wavelets and their related transforms represent another widely used technique in this category. Wavelet transforms provide a framework in which an image is decomposed into a coarse-resolution sub-band and multiple levels of finer-resolution sub-bands. {3}
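To make the wavelet idea concrete, here is a minimal NumPy sketch (illustrative only, not the method from the cited work) of single-level Haar-based pixel fusion: the coarse approximation sub-bands are averaged, and for each detail sub-band the larger-magnitude coefficient is kept (the common max-abs rule). Even image dimensions are assumed.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition into approximation (LL)
    and detail (LH, HL, HH) sub-bands. Assumes even dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def wavelet_fuse(img1, img2):
    """Average the approximation sub-bands; for each detail sub-band,
    keep the larger-magnitude coefficient (max-abs selection)."""
    b1, b2 = haar2d(img1), haar2d(img2)
    fused = [(b1[0] + b2[0]) / 2.0]
    for c1, c2 in zip(b1[1:], b2[1:]):
        fused.append(np.where(np.abs(c1) >= np.abs(c2), c1, c2))
    return ihaar2d(*fused)
```

Multi-level variants simply recurse on the LL band; the max-abs rule is one of several possible detail-selection rules.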
Statistical iterative reconstruction (SIR) {2} methods, by modelling the noise properties of the measurements and imposing adequate regularization within image reconstruction, can achieve a performance superior to other existing methods in terms of noise reduction and the noise-resolution tradeoff. A critical problem in SIR is the high computational burden due to the multiple re-projection and back-projection operations during image iterative reconstruction. To overcome this, restoring the ideal sinogram data from the acquired noisy data and then reconstructing the CT image from the estimated ideal sinogram data is an interesting alternative strategy, offering computational efficiency and noise-induced artifact suppression.
In CT scanning, the milliampere-seconds (mAs) setting is kept as low as reasonably achievable during data acquisition, but images from lower-mAs (low-dose) scans will unavoidably be degraded by excessive data noise if no adequate noise control is applied during image reconstruction. For image reconstruction with low-dose scans, sinogram restoration algorithms based on modelling the noise properties of the measurements can produce an image with noise-induced artefact suppression, but they often suffer noticeable resolution loss. {7}
Progress on medical image fusion techniques has been made, and various fusion algorithms have been developed. Medical image fusion can be performed at three broad levels: pixel level, feature level, and decision level. Pixel-based fusion {2} is performed on a pixel-by-pixel basis, generating a fused image in which the information associated with each pixel is selected from a set of pixels in the source images. Medical image fusion at the feature level requires the extraction of salient environment-dependent features, such as pixel intensities, edges, or textures. Decision-level fusion involves merging information at a higher level of abstraction, combining the results from multiple algorithms to yield a final fused decision {5}. At this level, input images are processed individually for information extraction, and the obtained information is then combined by applying decision rules to reinforce a common interpretation.
At the pixel level of medical image fusion, the simplest method is to average the input images to generate a fused version. However, this method heavily degrades the brightness and contrast of the input images. IHS fusion converts a low-resolution colour image from the RGB space into the IHS colour space.
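A tiny NumPy example (toy data, not from the study) illustrates why plain averaging degrades the fused result: when two sources carry complementary detail, the pixel-wise mean flattens it away.

```python
import numpy as np

# Two toy 2x2 "source images": each carries detail the other lacks.
a = np.array([[0.0, 100.0], [0.0, 100.0]])
b = np.array([[100.0, 0.0], [100.0, 0.0]])

fused = (a + b) / 2.0   # simple pixel-wise averaging

# Averaging collapses the complementary detail to a flat 50 everywhere:
# the contrast present in each source is lost in the fused image.
print(a.std(), fused.std())
```

This is exactly the failure mode that pyramid- and wavelet-based selection rules are designed to avoid.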
Wang et al. {10} proposed the KL-PWLS algorithm, which employs the KL transform to de-correlate data signals along nearby projection views for CT sinogram restoration. An adapted KL transform of dimension 3×3 is first applied to account for the correlative information of continuous data sampling along nearby views of the sinogram data. Let ÿ1 and y denote the KL-transformed components and the corresponding original sinogram data in the spatial domain. Then, in the KL domain, the PWLS criterion can be used to estimate the ith KL component ᶈ of the ideal sinogram data from the ith KL component ÿ1 of the original sinogram data by minimizing the following objective function,
where ˙l is the diagonal variance matrix, which can be estimated from the variance of the original sinogram data at detector bin i and view k. The scalar β is a hyper-parameter, di is the eigenvalue of the lth KL basis vector, and the last term is the penalty term. The original sinogram data y has a unique property, which can be expressed by a relationship between the sample mean and the variance:
where I0i is the incident X-ray intensity along projection path i, σe² is the variance of the electronic background noise, and ȳi is the sample mean of yi, estimated by neighbourhood averaging with a 3×3 window.
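The 3×3 neighbourhood averaging used to estimate the sample mean can be sketched as follows. This is an illustrative NumPy version, not the authors' code; replicating border bins at the edges is my assumption.

```python
import numpy as np

def local_mean_3x3(sinogram):
    """Estimate the sample mean at each sinogram bin by averaging its
    3x3 neighborhood; border bins are handled by edge replication."""
    padded = np.pad(sinogram, 1, mode="edge")
    out = np.zeros_like(sinogram, dtype=float)
    H, W = sinogram.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr : 1 + dr + H, 1 + dc : 1 + dc + W]
    return out / 9.0
```

The resulting smoothed array then plugs into the mean-variance relationship to give a per-bin variance estimate for the PWLS weighting.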

Research Methodology
The ndiNLM Algorithm:-
A normal-dose CT image scanned previously may be available in some clinical applications, such as CT perfusion imaging and CT angiography. This previous normal-dose scan can provide a reference image for constructing more reasonable non-local weights than those used in the original NLM algorithm {17} for low-dose CT image restoration. With this observation, the ndiNLM algorithm was proposed:-
where the reference image is the roughly registered previous normal-dose image aligned with the low-dose target image, and Z(i) is the normalizing factor.
The proposed sinogram-restoration-induced non-local means (SR-NLM) algorithm adapts the ndiNLM algorithm to exploit more reliable similarity information from the FBP image reconstructed from the KL-PWLS-restored low-dose sinogram data, instead of the FBP image {17} reconstructed from the original low-dose sinogram data. Specifically, the SR-NLM algorithm consists of four major steps:
(a) direct FBP image reconstruction from the original low-dose sinogram data; this image serves as the input to step (c);
(b) sinogram restoration using the KL-PWLS algorithm, followed by FBP image reconstruction from the KL-PWLS-restored sinogram data; this image serves as the reference for step (c);
(c) non-local weights construction using the two images from steps (a) and (b). Owing to the suppressed noise and artefacts in the restored image, the non-local weights can be calculated more reliably from it than from the low-dose image itself, which improves the non-local weighted average:
where Z(i) is the normalizing factor; the subsets Vi and Vj denote two similarity patch-windows centred at pixel i in the low-dose image and at pixel j in the restored image, respectively; and Ni represents the search-window centred at pixel i;
(d) non-local weighted average: after the non-local weight construction, the SR-NLM image fusion algorithm is executed via the non-local weighted average operation.
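The non-local weighted average in steps (c) and (d) can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code: patch similarities are measured in a reference image (in SR-NLM, the FBP image from the restored sinogram), and the noisy image is averaged with the resulting weights. The parameter names `patch`, `search`, and `h` (half-widths and filtering strength) are my choices.

```python
import numpy as np

def nlm_filter(img, ref, patch=1, search=2, h=10.0):
    """Non-local means sketch: restore `img` using patch similarities
    measured in a (possibly cleaner) reference image `ref`."""
    H, W = img.shape
    off = search + patch
    pad_img = np.pad(img, off, mode="reflect")
    pad_ref = np.pad(ref, off, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            pi, pj = i + off, j + off
            v_i = pad_ref[pi - patch:pi + patch + 1,
                          pj - patch:pj + patch + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    qi, qj = pi + di, pj + dj
                    v_j = pad_ref[qi - patch:qi + patch + 1,
                                  qj - patch:qj + patch + 1]
                    # weight from patch-window similarity
                    w = np.exp(-np.sum((v_i - v_j) ** 2) / (h * h))
                    wsum += w
                    acc += w * pad_img[qi, qj]
            out[i, j] = acc / wsum   # wsum plays the role of Z(i)
    return out
```

With `ref=img` this reduces to ordinary NLM; passing a cleaner reference is what distinguishes the ndiNLM/SR-NLM style of weighting.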

Fig:- Three digital phantoms used for the computer simulation studies. (a) The modified clock phantom contains eight inserts with varying contrast (C1: +30%, C2: −7%, C3: −15%, C4: +85%, C5: −30%, C6: +7%, C7: +15%, and C8: −85%). Eight ROIs marked by larger squares allow comparison of zoomed images. ROI 1, ROI 2, and the background region indicated by small squares allow comparison of the contrast-to-noise ratio. The lines along the edges of the inserts (C1 and C4) allow comparison of the noise-resolution tradeoff. (b) The image of one slice of the XCAT phantom with a lesion (contrast of +15%) indicated by a square; two ROIs marked by squares allow visual comparison of zoomed images. (c) The modified Shepp–Logan phantom with a low-contrast small lesion (contrast of +1.5%) indicated by the arrow.
Research Issues in Multi-Slice CT scan Image fusion:-
  1. Pixel Solution of ndiNLM Algorithm.
  2. Signal Reduction for wavelength.
  3. Exogenous source priors.
  1. Fusion Solution for Pixel under ndiNLM Algorithm
Markov random fields (MRFs) are used to regularize fusion {12} by imposing prior knowledge on the admissible fused images, depth maps, flow fields, and so on. While such models have proven useful for regularizing problems in computer vision and image fusion, MRFs have mostly been limited in three respects:
(1) They have used very simple neighbourhood structures. Most models in low-level vision are based on pairwise graphs, where the potential functions are formulated in terms of pixel differences (image derivatives) between neighbouring sites. The fused image is dissected into an assembly of nodes that may correspond to pixels or agglomerations of pixels, and hidden variables associated with the nodes are introduced into a model designed to "explain" the values (colours) of all the pixels.
(2) In many cases, the potentials have remained hand-defined and hand-tuned. Consequently, many MRFs do not necessarily reflect the statistical properties of the fused images. The direct statistical dependencies between hidden variables are expressed by explicitly grouping hidden variables.
(3) MRF models have typically not been spatially adaptive; that is, their potentials do not depend on the spatial location of a pixel within the image.
Analysis of the marginal statistics of steered derivative filter responses (see figure) reveals that while both are heavy-tailed, the derivative orthogonal to the image structure has a much broader histogram than the aligned derivative. The steerable random field (SRF) potentials model these steered filter responses using a Gaussian scale mixture (GSM), which is more flexible than many previous potential functions and is able to capture their heavy-tailed characteristics.
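A Gaussian scale mixture potential is easy to evaluate directly. The sketch below (illustrative only; the scales and weights are arbitrary choices, not fitted values) shows how mixing a few Gaussians of different variances penalizes large filter responses far less than a single Gaussian would, i.e. it produces heavy tails.

```python
import numpy as np

def gsm_neglog(x, sigma=1.0, scales=(0.1, 1.0, 10.0),
               weights=(1 / 3, 1 / 3, 1 / 3)):
    """Negative log of a Gaussian scale mixture potential:
    p(x) = sum_l w_l * N(x; 0, sigma^2 / s_l).
    The broad-variance components supply the heavy tails."""
    x = np.asarray(x, dtype=float)
    dens = np.zeros_like(x)
    for s, w in zip(scales, weights):
        var = sigma ** 2 / s
        dens += w * np.exp(-x ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return -np.log(dens)
```

For a large response such as x = 5, a single unit-variance Gaussian potential costs about x²/2 = 12.5, while the mixture's broad component keeps the penalty much smaller, which is what lets GSM-based potentials tolerate sharp edges.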
Fig. 2. The clock phantom images reconstructed by different methods, and the eight zoomed regions indicated by the marks C1 to C8 in Fig. 1(a). (a) The conventional FBP image with ramp filter, reconstructed from the original sinogram data; (b) the standard FBP image reconstructed from the sinogram data restored by the KL-PWLS algorithm with β = 400; (c) the conventional FBP image restored by the original NLM algorithm with h = 5.6×10³; and (d) the FBP image restored by the present SR-NLM algorithm with β = 400, h = 1.4×10³. All images are displayed with the same window.

  2. Signal Reduction for Wavelength:-
In order to evaluate the performance of the present SR-NLM algorithm in a more quantitative manner for image fusion, the peak signal-to-noise ratio (PSNR) and the normalized mean square error (NMSE) were used in this study. They are defined as:

PSNR = 10 log10 [ max(µphantom)² / ( (1/K) Σk (µ(k) − µphantom(k))² ) ]

NMSE = Σk (µ(k) − µphantom(k))² / Σk (µphantom(k))²
where µ(k) represents the intensity value at pixel k in the image µ, µphantom(k) represents the intensity value at pixel k in the ideal phantom image, K denotes the number of image pixels, and max(µphantom) represents the maximum intensity value of the ideal phantom image {17}.
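Under the definitions above, both merits are straightforward to compute; here is a NumPy sketch, assuming the reconstructed image and the ideal phantom are same-sized arrays.

```python
import numpy as np

def psnr(mu, mu_phantom):
    """Peak signal-to-noise ratio against the ideal phantom, in dB:
    10 * log10( max(phantom)^2 / mean squared error )."""
    mse = np.mean((mu - mu_phantom) ** 2)
    return 10.0 * np.log10(mu_phantom.max() ** 2 / mse)

def nmse(mu, mu_phantom):
    """Normalized mean square error: total squared error divided by
    the phantom's total squared intensity."""
    return np.sum((mu - mu_phantom) ** 2) / np.sum(mu_phantom ** 2)
```

Higher PSNR and lower NMSE indicate a reconstruction closer to the ideal phantom.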


Fig. 3. The horizontal profiles through the centre of the bone insert (C4) and the dark insert (C5) in the reconstructed clock phantom images, corresponding to Figs. 1 and 2, for the Markov pixel-fusion-over-wavelength problem solution.
  3. Exogenous Source Priors for the Markov Model to Implement Fusion (for complex image fusion)
If there is prior knowledge that activity is restricted to a volume of interest, the dipoles outside this volume can be masked, and the solution will be forced to lie inside the specified volume. However, this procedure can lead to errors, since all data, regardless of origin, will be explained by activity in this volume. An alternative and preferred approach is to use a soft constraint, by creating extra components in the set C that specify sources inside the volume of interest. The relationship between an object model (no matter how accurate) and the object's image is a complex one. The appearance of a small patch of a surface is a function of the surface properties, the patch's orientation, the position of the lights, and the position of the observer.
Given a model u(x) and an image v(y), we can formulate an imaging equation:

u(T(x)) = F(u(x), q)
The advantage of this configuration (a CT scanner with the ndiNLM algorithm and exogenous priors) is that by separating the two systems axially, possible interference can be minimized, the PET scanner design is not subject to geometric constraints imposed by the bore size of the MR system, and existing PET and MRI systems might be used with relatively little modification. The process of constructing an observation has two separate components. The first component is the transformation, or pose, T. It relates the coordinate frame of the model, x, to the coordinate frame of the image; the transformation tells us which part of the model is responsible for a particular pixel of the image. The second component is the imaging function, F(u(x), q), which determines the value of the image point u(T(x)). In general, a pixel's value may be a function both of the model and of other exogenous factors.
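The two components can be mimicked in a few lines of NumPy. This is a toy sketch only: the integer-translation pose and the gain-plus-bias imaging function are assumptions chosen purely for illustration.

```python
import numpy as np

def apply_pose(model, shift):
    """Pose T: an integer translation of the model's coordinate frame
    (toy example; a real pose could be any rigid or affine transform)."""
    return np.roll(model, shift, axis=(0, 1))

def imaging_function(u, q):
    """Imaging function F(u, q): exogenous factors q modeled here as a
    simple gain and bias applied to the posed model values."""
    gain, bias = q
    return gain * u + bias

# u(T(x)) then F(u, q): predicted image from posed model + imaging model
model = np.arange(9.0).reshape(3, 3)
predicted = imaging_function(apply_pose(model, (1, 0)), q=(2.0, 5.0))
```

The pose decides which model point explains each pixel; the imaging function decides what value that point produces under the exogenous factors q.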
For example, when functional CT scan data are available, regional activations can be included as extra components in C, translating the CT scan information into candidate MSP patches within the library (see Henson et al., 2010). These must be soft constraints, as one cannot assume that a volume showing CT scan responses will necessarily contribute to the MEG/EEG data. Note that incorporating prior knowledge in this way does not bias the estimate of source activity; rather, it allows the estimate to take non-zero values.
u(T(x)) = µphantom, F(u(x), q) = NMSE {13}. This is the newly derived algorithm with the Markov model for the pixel solution: under this exogenous prior, I performed the transformation of the 3×3 image for fusion using the eigenvalues of the normalized mean square error (NMSE), with the maximum wavelength µphantom, where u(T(x)) takes the transformed values from the output function values of the phantom wavelength u(x), q.