Issue 1, Volume 4, January 2008
Title of the Paper: One Approach to the Analysis of the Influence of Changing Background Statistical Parameters on the Capability of Tracking Objects Using the “Mean Shift” Procedure
Authors: Dimitrije
Bujakovic, Milenko Andric
Abstract: A quantitative analysis of the influence of changing background statistics on the capability of tracking objects using the “mean shift” procedure is presented in this paper. Changes in background statistics are taken to mean changes in the mean brightness and in the noise variance of the scene. The quantitative analysis considers the detection error and the number of iterations needed for position determination using the “mean shift” procedure.
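As an illustration of the procedure analysed above, the following is a minimal sketch of the “mean shift” position update on a weight image w (for instance, a histogram back-projection of the target model). The function, window size and stopping threshold are illustrative assumptions rather than the authors' implementation; the returned iteration count corresponds to the second quantity studied in the paper.

    import numpy as np

    def mean_shift(w, y0, half_win=8, eps=0.5, max_iter=20):
        # w: 2D weight image; y0: initial (row, col) target position.
        # Assumes the search window stays inside the image bounds.
        y = np.asarray(y0, dtype=float)
        for n_iter in range(1, max_iter + 1):
            r0, r1 = int(y[0]) - half_win, int(y[0]) + half_win + 1
            c0, c1 = int(y[1]) - half_win, int(y[1]) + half_win + 1
            patch = w[r0:r1, c0:c1]
            rows, cols = np.mgrid[r0:r1, c0:c1]
            m = patch.sum()
            if m <= 0:
                break
            # new position = weighted centroid of the local window
            y_new = np.array([(rows * patch).sum() / m,
                              (cols * patch).sum() / m])
            if np.linalg.norm(y_new - y) < eps:
                return y_new, n_iter
            y = y_new
        return y, n_iter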
Keywords:
Object detection, object tracking, background statistical parameters, “mean
shift” procedure, quantitative analysis, detection error, number of
iterations
Title of the Paper: Modelling
Geomagnetic Activity Data
Authors: Ernst D.
Schmitter
Abstract: Strong geomagnetic activity is a hazard to electronics and electric power facilities. Assessing the actual geomagnetic activity level from local magnetometer monitoring is therefore important not only for risk assessment but also in the earth sciences and in exploration. Wavelet-based signal processing methods are applied to extract meaningful information from magnetic field time series in a noisy environment. Using a proper feature vector, a local geomagnetic activity index can be derived under non-ideal circumstances using computational intelligence methods. Locally linear radial basis function nets and self-organizing maps are discussed in this context as data-based process models.
Keywords:
geomagnetism, signal processing, wavelets, neuro-fuzzy modelling, self-organizing map
Title of the Paper: Adaptive
Approach on Trimulus Color Image Enhancement and Information Theory Based
Quantitative Measuring
Authors: Zhengmao Ye,
Habib Mohamadian, Yongmao Ye
Abstract: Image enhancement and image clustering are two practical approaches to pattern recognition with a variety of engineering applications. In most cases, the actual outcomes of advanced image processing approaches directly affect decision making, for example in target detection and medical diagnosis. Among these approaches, adaptive contrast stretching is a typical enhancement technique for conditions of improper illumination and unwanted disturbances, which adapts to the intensity distribution of an image. K-means clustering is a typical segmentation approach for minimizing the medium dispersing effect, which produces distinctive clusters or layers representing different components of the information being detected. In trimulus color systems, each of the three color components plays an independent role throughout the image processing procedures. To evaluate the actual effects of image enhancement and image segmentation, quantitative measures should be taken into account rather than qualitative evaluations exclusively. In this article, quantitative measures for trimulus color systems are proposed instead of the existing gray-level ones. By analogy with the gray-level image measures, the corresponding true-color RGB component energy, discrete entropy, relative entropy and mutual information are proposed to measure the effectiveness of color image enhancement and segmentation techniques.
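As a rough illustration of how such measures extend per channel, here is a minimal sketch of discrete entropy and relative entropy computed on one RGB component; the histogram binning and function names are assumptions for illustration, not the authors' exact formulation.

    import numpy as np

    def discrete_entropy(channel, bins=256):
        # Shannon entropy (bits) of one color component's histogram
        p, _ = np.histogram(channel, bins=bins, range=(0, 256))
        p = p / p.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def relative_entropy(enhanced, original, bins=256):
        # Kullback-Leibler divergence between output and input histograms
        p, _ = np.histogram(enhanced, bins=bins, range=(0, 256))
        q, _ = np.histogram(original, bins=bins, range=(0, 256))
        p, q = p / p.sum(), q / q.sum()
        mask = (p > 0) & (q > 0)
        return (p[mask] * np.log2(p[mask] / q[mask])).sum()

Applying these to the R, G and B components separately gives per-channel measures in the spirit of those discussed in the abstract.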
Keywords:
Image Enhancement, Image Segmentation, Trimulus Color, Energy, Discrete
Entropy, Relative Entropy, Mutual Information, Contrast Stretching, K-means
Clustering
Title of the Paper: Comparison
of Methods to Estimate Individual Tree Attributes Using Color Aerial
Photographs and LiDAR Data
Authors: Anjin Chang,
Jung Ok Kim, Kiyun Ryu, Yong
Il Kim
Abstract: The main objective of this study was to compare methods of estimating the number of trees and individual tree height using LiDAR data and aerial photography. A Korean pine study area was selected, and the methods of watershed segmentation, region-growing segmentation, and morphological filtering were compared to estimate their accuracy. The algorithm was initiated by developing a normalized digital surface model (NDSM). A tree region was then extracted using classification and the elimination of errors in the NDSM and the photograph. The NDSM of the tree region was prefiltered, and information about individual trees was extracted by segmentation and morphological methods. Tree height was obtained by local maximum filtering. Field observations were compared with the predicted values for accuracy assessment. The accuracy test showed the watershed segmentation algorithm to be the best estimator for tree modeling. Regression models for the study area explained 80% of the tree numbers and 89% of the heights.
Keywords:
Aerial photography, LiDAR, Segmentation, Tree modeling
Issue 2, Volume 4, February 2008
Title of the Paper: Analysis
of Neuromuscular Disorders Using Statistical and Entropy Metrics on Surface
EMG
Authors: Rok Istenic,
Prodromos A. Kaplanis, Constantinos S. Pattichis, Damjan Zazula
Abstract: This paper introduces a surface electromyogram (EMG) classification system based on statistical and entropy metrics. The system is intended for diagnostic use and classifies the examined subject as normal, myopathic or neuropathic on the basis of the acquired EMG signals. A total of 39 subjects participated in the experiment: 19 normal, 11 myopathic and 9 neuropathic. Surface EMG was recorded using 4-channel surface electrodes on the biceps brachii muscle during isometric voluntary contractions. The recording time was kept to 5 seconds to avoid muscle fatigue, and contractions were performed at five force levels, i.e. 10, 30, 50, 70 and 100% of maximal voluntary contraction. The feature extraction routine applied the wavelet transform and calculated the Shannon entropy across all scales in order to obtain a feature set for each subject. Subjects were classified on the basis of the extracted features using three machine learning techniques, i.e. decision trees, support vector machines and ensembles of support vector machines. Four 2-class classifications and a 3-class classification were performed. The classification rates were as follows: 64±11% for normal/abnormal, 74±7% for normal/myopathic, 79±8% for normal/neuropathic, 49±20% for myopathic/neuropathic, and 63±8% for normal/myopathic/neuropathic.
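A minimal sketch of the kind of feature extraction described (multilevel wavelet decomposition followed by Shannon entropy per scale) might look as follows; the wavelet family, level count and energy normalization are illustrative assumptions, not the authors' exact settings.

    import numpy as np
    import pywt  # PyWavelets

    def entropy_features(emg, wavelet="db4", levels=5):
        # multilevel wavelet decomposition of one EMG channel
        coeffs = pywt.wavedec(emg, wavelet, level=levels)
        feats = []
        for c in coeffs:
            e = c ** 2                    # coefficient energies per scale
            p = e / e.sum()               # normalize to a distribution
            p = p[p > 0]
            feats.append(-(p * np.log2(p)).sum())  # Shannon entropy
        return np.array(feats)            # one feature per scale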
Keywords:
surface electromyography, neuromuscular disorders, neuropathy, myopathy,
isometric voluntary contraction, entropy, wavelet transform
Title of the Paper: Vision-Based
Distance and Area Measurement System
Authors: Cheng-Chuan
Chen, Ming-Chih Lu, Chin-Tun Chuang, Cheng-Pei Tsai
Abstract: The objective of this paper is to enable a CCD camera to measure area while simultaneously recording images. Based on a relationship established in this paper between pixel count and distance, we can derive the horizontal and vertical lengths of a targeted object and subsequently calculate the area it covers. Because of the advantages demonstrated, the proposed system can be used for large-area measurements. For example, it can measure the size of a gap in an embankment during flooding, or the actual area affected by a landslide. Other applications include surveying ecosystems by inspecting how widely a certain type of life form has spread. For places that are difficult or impossible to reach, this system can be particularly useful for performing area measurements. Experiments conducted in this paper indicate that different shooting distances and angles do not affect the measurement results.
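The underlying idea can be sketched in a few lines: once a scale factor relating pixel count to physical length at a given distance has been calibrated, the object's size follows from its pixel extents. The linear model and the names below are assumptions for illustration only, not the authors' calibration procedure.

    def object_area(pix_w, pix_h, distance, k):
        # k: calibrated length per pixel at unit shooting distance
        width = k * pix_w * distance    # horizontal length of the target
        height = k * pix_h * distance   # vertical length of the target
        return width * height           # area covered by the object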
Keywords:
CCD camera, area measurement system, laser beams, pixels.
Title of the Paper: Decomposition
of Multi-Exponential and Related Signals – Functional Filtering Approach
Authors: Vairis Shtrauss
Abstract: The decomposition of multi-exponential and related signals is generalized as an inverse filtering problem on a logarithmic time or frequency scale, and discrete-time filters operating on data equally spaced on a logarithmic scale (geometrically spaced on a linear scale) are proposed for its implementation. Ideal prototypes, algorithms and filter types are found for various time- and frequency-domain mono-components. It is shown that the ill-posedness of the decomposition manifests itself as large, sampling-rate-dependent noise amplification coefficients arising from the large areas under the increasing frequency responses. A novel regularization method is developed based on regulating the noise transformation by controlling the filter bandwidth, implemented by adapting the sampling rate appropriately. A design procedure for decomposition filters is suggested that joins together signal acquisition, regularization and discrete-time filter implementation. As an example, the decomposition of a frequency-domain multi-component signal by a designed filter is considered.
Keywords:
Decomposition, Multi-Component Signals, Distribution of Time Constants,
Functional Filters, Logarithmic Sampling, Ill-posedness, Regularization
Issue 3, Volume 4, March 2008
Title of the Paper: Comparative Study of Several FIR Median Hybrid Filters for Blink Noise Removal in Electrooculograms
Authors: Marcelino
Martinez, Emilio Soria, Rafael Magdalena, Antonio Jose Serrano, Jose David Martín, Joan Vila
Abstract: The presence of a kind of impulsive noise due to eye blinks is typical during the acquisition of electrooculograms. This paper describes a comparative study of several algorithms used to remove blink noise from the electrooculogram while preserving the sharp edges produced in the signal by the so-called saccadic eye movements. Median filters (MF) and several types of FIR median hybrid filters (FMH) have been analyzed. Two types of real electrooculogram recordings with saccadic movements at controlled positions were used to test the performance of the pre-processing filters (sampling rate 20 Hz). The filtered signals were then processed with a saccadic eye movement detection algorithm in order to detect changes in sensitivity and positive predictive value. The results show that, in this particular study, neither FMH filters nor WFMH filters produce better results than median filters. The highest averaged values of sensitivity and positive predictive value are obtained using a median filter of length I=6 samples (S=96.22%, V++=95.42%) and the SWFMH variant of the same length (S=96.27%, V++=91.91%). Although the differences in detection rates between these filters are not meaningful, median filters obtain slightly higher saccade detection rates than SWFMH, while a reduction in computational burden is obtained by using the FMH variants.
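For reference, a basic FIR median hybrid filter takes the median of a forward averaging sub-filter, the current sample and a backward averaging sub-filter. The sketch below is a generic textbook form with assumed names and lengths, not the exact variants compared in the paper.

    import numpy as np

    def fmh_filter(x, half_len=3):
        # median of (left-half average, current sample, right-half average)
        x = np.asarray(x, dtype=float)
        y = x.copy()
        for n in range(half_len, len(x) - half_len):
            left = x[n - half_len:n].mean()           # forward FIR sub-filter
            right = x[n + 1:n + 1 + half_len].mean()  # backward FIR sub-filter
            y[n] = np.median([left, x[n], right])
        return y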
Keywords:
electrooculogram, median filter, fir median hybrid filter, blink, saccadic,
eye movement, eye tracking
Title of the Paper: The
Detection of Gear Noise Computed by Integrating the Fourier and Wavelet
Methods
Authors: Niola Vincenzo,
Quaremba Giuseppe, Forcelli Aniello
Abstract: This paper presents a new gearbox noise detection algorithm based on analyzing specific points of vibration signals using the Wavelet Transform. The proposed algorithm is compared with a previously developed algorithm based on Fourier decomposition with Hanning windowing. Simulations carried out on real data demonstrate that the WT algorithm achieves comparable accuracy while having a lower computational cost. This makes the WT algorithm an appropriate candidate for fast processing of gearbox noise.
Keywords:
Signal processing, gear noise, Wavelet Transform, multiresolution analysis.
Title of the Paper: Object-Oriented
Analysis Applied to High Resolution Satellite Data
Authors: Vincenzo
Barrile, Giuliana Bilotta
Abstract: The aim of this contribution is to examine an application of Object-Oriented Image Analysis to very high resolution data, namely multispectral and panchromatic Ikonos images of Bagnara Calabra, in the province of Reggio Calabria. Our objective is to show how an automatic analysis, implemented in a unitary package for segmentation and Neuro-Fuzzy classification with minimal manual intervention, can achieve a good classification even on high and very high resolution data of small towns, where the possibility of error is higher.
Keywords:
Object-Oriented Image Analysis, Morphology-Based Segmentation, Fuzzy Classification.
Title of the Paper: Word
and Triphone Based Approaches in Continuous Speech Recognition for Tamil
Language
Authors: R. Thangarajan,
A. M. Natarajan, M. Selvam
Abstract: Building a continuous speech recognizer for an Indian language like Tamil is challenging due to unique inherent features of the language such as long and short vowels, the lack of aspirated stops, aspirated consonants and many instances of allophones. Stress and accent vary in the spoken language from region to region, but in read Tamil speech, stress and accent are ignored. There are three approaches to continuous speech recognition (CSR) based on the sub-word unit, viz. word, phoneme and syllable. Like other Indian languages, Tamil is syllabic in nature, and the pronunciation of words and sentences is strictly governed by a set of rules. Many attempts have been made to build continuous speech recognizers for Tamil for small and restricted tasks. However, medium- and large-vocabulary CSR for Tamil is relatively new and largely unexplored. In this paper, the authors have built Hidden Markov Model (HMM) based word and triphone acoustic models. The objective of this research is to build a small-vocabulary word-based and a medium-vocabulary triphone-based continuous speech recognizer for the Tamil language. In this experimentation, a word-based Context Independent (CI) acoustic model for 371 unique words and a triphone-based Context Dependent (CD) acoustic model for 1700 unique words have been built. In addition to the acoustic models, a pronunciation dictionary with 44 base phones and a trigram-based statistical language model have been built as integral components of the linguist. These recognizers give very good word accuracy on trained and test sentences read by trained and new speakers.
Keywords:
Acoustic Model, Context Dependent, Context Independent, Continuous Speech Recognition,
Hidden Markov Model, Tamil language, Triphone.
Title of the Paper: Genetic
Algorithms based Adaptive Search Area Control for Real Time Multiple Face
Detection using Neural Networks
Authors: Stephen Karungaru,
Minoru Fukumi, Takuya Akashi, Norio Akamatsu
Abstract: Fast and automatic face detection in visual scenes is a vital preprocessing step in many face applications such as recognition, authentication and analysis. While the detection of a single face can be accomplished with good accuracy, multiple face detection in real time is more challenging, not only because of different face sizes and orientations but also due to the limits of the available processing power. In this paper, we propose a real-time multiple face detection method using multiple neural networks and an adaptive search area control method based on genetic algorithms. Although neural networks and genetic algorithms may not seem suitable for real-time applications because of their long processing times, we show that high detection accuracy and fast speed can be achieved using small, effective neural networks and a genetic algorithm with a small population size that requires few generations to converge. The proposed method subdivides the face into several small regions, each connected to an individual neural network. The subdivision guarantees small networks and provides the ability to learn the features of different face regions using region-specialized input coding methods. The genetic algorithm is used during the real-time search to extract possible face samples from face candidates, and the fitness of the face samples is calculated using the neural networks. In successive frames, the search area is adaptively controlled based on information inherited from the preceding frames. To prove the effectiveness of our approach, we performed real-time simulations using an inexpensive USB camera.
Keywords:
Adaptive search area control, Genetic Algorithms, Neural networks, Real-time
processing, feature extraction.
Title of the Paper: The
Fractal Dimension Correlated to the Bone Mineral Density
Authors: Khaled Harrar, Latifa Hamami
Abstract: Osteoporosis is a condition of decreased bone mass. It leads to fragile bones at increased risk of fracture and most often affects postmenopausal women. In this paper we propose a study of osteoporosis using the fractal dimension. After an introduction to fractal theory and the fractal dimension, we apply the box-counting method to segmented radiographic images; the influence of the range of box sizes on the fractal dimension is investigated, as is the correlation between a reference dimension and bone mineral density. Other imaging techniques are also considered in order to examine the results of applying the method to these types of images.
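A minimal sketch of the box-counting dimension on a binarized (thresholded) radiograph is given below, with assumed box sizes and assuming a non-empty image; the dimension is the negative slope of log(count) against log(box side length), which is exactly where the choice of the range of box sizes enters.

    import numpy as np

    def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
        counts = []
        h, w = binary_img.shape
        for s in sizes:
            # tile the image with s-by-s boxes and count occupied ones
            grid = binary_img[:h - h % s, :w - w % s]
            blocks = grid.reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope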
Keywords:
Box counting method, Osteoporosis, Fractal dimension, Radiographic images,
Side length, Threshold.
Title of the Paper: Rotating
Projection Algorithm for Computer Tomography of Discrete Structures
Authors: A. Grebennikov,
J. G. Vazquez Luna, T. Valencia Perez, M. Najera Enriquez
Abstract: Traditional computer tomography requires scanning the object to obtain many projections. Image reconstruction is then carried out on the basis of a mathematical model corresponding to the physical field that produces the tomography; in X-ray tomography, for example, inversion of the Radon transform is used. This is necessary for complicated structures and can be realized in a sufficiently fast manner. In this paper we consider the situation in which the investigated object has a “discrete structure”, so that its reconstruction consists only in localizing point-wise elements with different characteristics inside a homogeneous (or quasi-homogeneous) substance in the considered region. For this case we propose the Rotating Projection algorithm, which uses a small number of scanning angles. The algorithm does not require the application of inverse transforms, which simplifies the image reconstruction. The proposed approach is faster in its computer realization and makes it possible to reduce the time of radiation treatment. The good properties of the developed algorithm are demonstrated on simulated numerical examples.
Keywords:
Image reconstruction, rotating projection, computer tomography
Issue 4, Volume 4, April 2008
Title of the Paper: Ship Classification Based on Acoustic Signatures
Authors: Andrzej Zak
Abstract: The paper presents a technique using artificial neural networks as a classifier of the hydroacoustic signatures generated by a moving ship. The main task of the proposed solution is to classify the objects producing the underwater noise. Firstly, measurements were carried out dynamically by running the ship past stationary hydrophones mounted on tripods 1 m above the sea bottom. Secondly, to identify the sources of noise, vibration levels were measured on board by accelerometers installed on important components of the machinery. On the basis of these measurements, the sound pressure level, noise spectra and spectrograms, and the transmission of acoustic energy via the hull into the water were determined. Moreover, using the coherence function, it was verified that components of the underwater noise originate in the vibrations of the ship's mechanisms. Based on this research it was possible to create the hydroacoustic signature, or so-called “acoustic portrait”, of a moving ship. Next, during complex ship measurements at the Polish Navy Test and Evaluation Acoustic Range, the hydroacoustic noise generated by moving ships was acquired. Based on these results, a classifier of acoustic signatures using an artificial neural network was worked out. Among artificial neural network techniques, Kohonen networks, which belong to the group of self-organizing networks, were chosen to solve the classification problem. The choice was motivated by several advantages of this kind of network: they are ideal for finding relationships among complex sets of data, and they can extend their set of answers to new input vectors. To check the correctness of the classifier, experiments were carried out counting correct classifications for hydroacoustic signatures both previously presented and not previously presented to the network. Some results of this research are presented in the paper. The described method is currently being extended, and its application is planned as an assistant subsystem for the hydrolocation systems of Polish Navy ships.
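For orientation, a bare-bones Kohonen self-organizing map trained on signature feature vectors could look like the sketch below; the grid size, decay schedules and random seeding are assumptions for illustration and not the authors' configuration.

    import numpy as np

    def train_som(signatures, grid=(8, 8), epochs=50, lr0=0.5, sigma0=3.0):
        # signatures: array of shape (n_samples, n_features)
        rng = np.random.default_rng(0)
        w = rng.random((grid[0], grid[1], signatures.shape[1]))
        coords = np.stack(np.mgrid[0:grid[0], 0:grid[1]], axis=-1)
        for t in range(epochs):
            lr = lr0 * (1.0 - t / epochs)              # decaying learning rate
            sigma = sigma0 * (1.0 - t / epochs) + 0.5  # shrinking neighbourhood
            for x in signatures:
                d = ((w - x) ** 2).sum(axis=2)
                bmu = np.unravel_index(d.argmin(), d.shape)  # winning node
                g = np.exp(-((coords - np.array(bmu)) ** 2).sum(axis=2)
                           / (2.0 * sigma ** 2))
                w += lr * g[..., None] * (x - w)       # pull weights toward x
        return w

After training, a new signature would be classified by the label associated with its best-matching unit.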
Keywords:
Self-Organizing Map, Kohonen's neural networks, Hydroacoustic signatures, Classification.
Title of the Paper: Data
Fusion and Topology Control in Wireless Sensor Networks
Authors: Vrinda Gupta, Rajoo Pandey
Abstract: The design of large-scale sensor networks interconnecting various sensor nodes has spurred a great deal of interest due to their wide variety of applications. Data fusion is a critical step in designing a wireless sensor network, as it handles the data acquired by sensory devices. Wireless sensor networks allow distributed sensing and signal processing while collaborating in energy-efficient operation. Wireless sensor networks are battery powered; therefore, prolonging the network lifetime through energy-aware node organization is highly desirable. The main goal of a topology control scheme in wireless sensor networks is to reduce power consumption in order to extend network lifetime. Our aim is to provide a better understanding of the current research issues in this field. The paper provides a detailed look at some existing data fusion and topology management algorithms. The most important design issues of data fusion and topology control are also highlighted.
Keywords:
Wireless sensor networks, data aggregation, data fusion, topology control,
protocols.
Title of the Paper: Single
Channel Audio Source Separation
Authors: Bin. Gao, W. L.
Woo, S. S. Dlay
Abstract: Blind source separation is an advanced statistical tool that has found widespread use in many signal processing applications. However, the core problem of single-channel audio source separation has not been developed fully enough to reach laboratory implementation. The main approach to single-channel blind source separation is based on exploiting the inherent time structure of the sources through time-domain basis filters that encode the sources in a statistically efficient manner. This paper proposes a technique for separating a single-channel recording of an audio mixture using a hybrid of maximum likelihood and maximum a posteriori estimators. In addition, the algorithm takes a new approach that accounts for the time structure of the speech signals by encoding them into a set of basis filters that are characteristically the most significant.
Keywords:
Single Channel Separation, Blind Source Separation, Characteristic Filters,
ML, MAP
Title of the Paper: Association-Based
Image Retrieval
Authors: Arun Kulkarni,
Harikrisha Gunturu, Srikanth Datla
Abstract: With advances in computer technology and the World Wide Web there has been an explosion in the amount and complexity of multimedia data that are generated, stored, transmitted, analyzed, and accessed. In order to extract useful information from this huge amount of data, many content-based image retrieval (CBIR) systems have been developed in the last decade. A typical CBIR system captures image features that represent image properties such as color, texture, or the shape of objects in the query image and tries to retrieve images from the database with similar features. Recent advances in CBIR systems include relevance-feedback-based interactive systems. The main advantage of CBIR systems with relevance feedback is that they take into account the gap between high-level concepts and low-level features, as well as the subjectivity of human perception of visual content. In this paper, we propose a new approach to image storage and retrieval called association-based image retrieval (ABIR), in which we try to mimic human memory: the human brain stores and retrieves images by association. We use a generalized bi-directional associative memory (GBAM) to store associations between feature vectors. The results of our simulations are presented in the paper.
Keywords:
Content-Based Image Retrieval, Association-Based Image Retrieval,
Bi-directional Associative Memories.
Title of the Paper: Blind
Tamper Detection in Audio using Chirp based Robust Watermarking
Authors: O. Farooq, S.
Datta, J. Blackledge
Abstract: In this paper, we propose the use of ‘chirp coding’ for embedding a watermark in audio data without generating any perceptual degradation of audio quality. A binary sequence (the watermark) is derived from energy-based features of the audio signal, and chirp coding is used to embed the watermark in the audio data. The chirp coding technique is such that the same watermark can be derived from the original audio signal as well as recovered from the watermarked signal. This not only enables ‘blind’ recovery of the watermark, but also provides two independent extraction processes for the watermark, from which it is possible to verify the authenticity of the audio data, any mismatch indicating that the data may have been tampered with. To evaluate the robustness of the proposed scheme, different attacks, such as compression, filtering and sampling rate alteration, have been simulated. The results obtained reflect the high robustness of the watermarking method and its effectiveness in detecting any data tampering that may have occurred. For the perceptual transparency of the watermark, Perceptual Assessment of Audio Quality (PEAQ, ITU-R BS.1387) on Speech Quality Assessment Material (SQAM) has been undertaken, and an average Objective Difference Grade of -0.5085 was achieved.
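To illustrate the general notion of chirp coding (not the authors' exact scheme), each watermark bit can be encoded as a low-amplitude linear chirp, an up-sweep for 1 and a down-sweep for 0. All parameters and names below are assumptions.

    import numpy as np

    def chirp_code(bits, fs=44100, dur=0.05, f0=2000.0, f1=4000.0):
        # one linear chirp per bit period; time reversal gives the down-chirp
        t = np.arange(int(fs * dur)) / fs
        up = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * dur)))
        return np.concatenate([up if b else up[::-1] for b in bits])

    # hypothetical usage: watermarked = audio[:n] + 0.01 * chirp_code(bits)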
Keywords:
Chirp Coding, Robust Audio Watermarking, Self-Authentication, Tamper
Detection, Wavelet Transform.
Title of the Paper: Polyphonic
Music Separation based on the Simplified Energy Splitter
Authors: Kristof Aczel, Istvan Vajk
Abstract: In past years, many approaches have been developed that target the separation of polyphonic music material into independent source signals. Due to the lack of information on the original signals, it is currently practically impossible to extract the original waveforms from their mixture. Thus all of the approaches target the reconstruction of signals that are at least in some way close to the originals. For that purpose, common features of harmonic sounds are usually exploited. This paper proposes a system that uses frequency-domain instrument models as prior knowledge for reinserting the information needed for the separation. The system provides over 18 dB Signal to Distortion Ratio for two simultaneous notes, which slowly degrades as the level of polyphony increases. This makes the approach highly applicable both as a standalone separation tool and as the groundwork for other signal manipulation methods.
Keywords:
sound separation, instrument print, polyphonic music, energy split
Title of the Paper: Syllable-Based Automatic Arabic Speech Recognition in a Noisy Telephone Channel
Authors: Mohamed Mostafa
Azmi, Hesham Tolba, Sherif Mahdy, Mervat Fashal
Abstract: The performance of well-trained speech recognizers using high-quality full-bandwidth speech data is usually degraded when they are used in real-world environments. In particular, telephone speech recognition is extremely difficult due to the limited bandwidth of the transmission channels. In this paper, we concentrate on the telephone recognition of Egyptian Arabic speech using syllables. Arabic spoken digits are described by showing their constituent phonemes, triphones, syllables and words. A speaker-independent hidden Markov model (HMM) based speech recognition system was designed using the Hidden Markov Model Toolkit (HTK). The database used for both training and testing consists of forty-four Egyptian speakers. In a clean environment, experiments show that the recognition rate using syllables outperformed the rates obtained using monophones, triphones and words by 2.68%, 1.19% and 1.79% respectively. In a noisy telephone channel, syllables likewise outperformed monophones, triphones and words by 2.09%, 1.5% and 0.9% respectively. Comparative experiments indicate that the use of syllables as acoustic units leads to an improvement in the recognition performance of HMM-based ASR systems in noisy environments. A syllable unit spans a longer time frame, typically three phones, thereby offering a more parsimonious framework for modeling pronunciation variation in spontaneous speech. Moreover, syllable-based recognition uses a relatively smaller number of units and runs faster than word-based recognition.
Keywords:
Speech recognition, syllables, Arabic language, HMMs, Noisy-channel.
Title of the Paper: Design
and Implementation of Digital FIR Equiripple Notch Filter on ECG Signal for
Removal of Power line Interference
Authors: Mahesh S.
Chavan, Ra. Agarwala, M. D. Uplane
Abstract: Filtering of power line interference is very important in the recording of biomedical events, particularly for signals as weak as the ECG. Available filters for power line interference either need a reference channel or regard the frequency as fixed at 50/60 Hz. Methods of noise reduction have a decisive influence on the performance of all electrocardiographic (ECG) signal processing systems. This work deals with the problem of power line interference reduction. In the literature of the last twenty years, several solutions for the removal of power line interference from electrocardiogram (ECG) signals can be found. Some analogue and digital approaches to this problem are presented and their properties, advantages and disadvantages are shown. The present paper deals with the design and development of a digital FIR equiripple filter. The basic ECG has a frequency range from 0.5 Hz to 100 Hz. Artifacts play a significant role in the processing of the ECG signal: it becomes difficult for the specialist to diagnose diseases if artifacts are present. In the present work, a notch filter is designed and applied to an ECG signal containing power line noise. The complete design is performed with the FDA tool in Matlab. The designed equiripple notch filter has a high order, which increases the computational complexity. For acquiring the ECG in real time, the related instrumentation has been developed in the laboratory. The results show the ECG signal before and after filtering, with frequency spectra that clearly indicate the reduction of the power line interference in the ECG signal.
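A sketch of one way to obtain such an equiripple notch (here with SciPy's Parks-McClellan routine rather than the Matlab FDA tool) is shown below; the sampling rate, band edges and tap count are assumptions and may need adjusting for convergence.

    import numpy as np
    from scipy.signal import remez

    fs = 500.0  # assumed ECG sampling rate (Hz)
    # pass below 45 Hz, reject 48-52 Hz (the 50 Hz mains), pass above 55 Hz
    taps = remez(numtaps=301,
                 bands=[0, 45, 48, 52, 55, fs / 2],
                 desired=[1, 0, 1],
                 fs=fs)
    # filtered_ecg = np.convolve(ecg, taps, mode="same")

The narrow transition bands are what force the high filter order mentioned in the abstract.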
Keywords:
Electrocardiogram, Simulation, Equiripple Filter, Real Time Filtering, Noise Reduction.
Title of the Paper: Novel
Statistical Approach to Blind Recovery of Earth Signal and Source Wavelet
using Independent Component Analysis
Authors: Aws Al-Qaisi,
W. L. Woo, S. S. Dlay
Abstract: This paper provides a new statistical approach to the blind recovery of both the earth signal and the source wavelet, given only the seismic traces, using independent component analysis (ICA) and explicitly exploiting the sparsity of both the reflectivity sequence and the mixing matrix. Our proposed blind seismic deconvolution algorithm consists of three steps. Firstly, a transformation method is proposed that maps the seismic trace convolution model into a multiple-input multiple-output (MIMO) instantaneous ICA model using zero-padding matrices; as a result, the nonzero elements of the sparse mixing matrix contain the source wavelet. Secondly, whitening of the observed seismic trace incorporating the zero-padding matrices is conducted as a pre-processing step to exploit the sparsity of the mixing matrix. Finally, a novel logistic function that matches the sparsity of the reflectivity sequence distribution is proposed and fitted into the information maximization algorithm to obtain the demixing matrix. Experimental simulations have been carried out to verify the performance of the proposed algorithm against conventional ICA algorithms such as FastICA and JADE. The mean square error (MSE) of the estimated wavelet and the estimated reflectivity sequence shows the improvement achieved by the proposed algorithm.
Keywords:
blind deconvolution, seismic signal processing, sparse ICA, information maximization algorithm, FastICA algorithm, JADE algorithm, zero-padding matrices
Title of the Paper: Analysis
of Heart Rate Variation Filtering Using LMS Based Adaptive Systems
Authors: S. Seyedtabaii
Abstract: Heart Rate Variability (HRV) is widely used as an index of human autonomic nervous activity. HRV is composed of two major components: high-frequency respiratory sinus arrhythmia (RSA) and low-frequency sympathetic components. The LF/HF ratio is viewed as an index of human autonomic balance, so the low-frequency sympathetic and high-frequency parasympathetic components of an ECG R-R interval series must be adequately separated. Adaptive filters can isolate the low-frequency component, enabling more accurate heart rate variability measures. For this case, the paper suggests an efficient (short-length) model and illustrates its performance in adaptive filtering of the heart rate signal. The method yields results comparable to those of a higher-order conventional FIR-model adaptive filter. The strength of the proposed model comes from its ability to track the variation of the phase difference between the reference and the primary signal of an adaptive filtering system. This capability, it is then shown, leads to an increase in the convergence rate of the LMS algorithm in HRV adaptive filtering. Simulation results supporting the proposed concept are presented.
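For readers unfamiliar with the baseline being improved upon, a standard LMS adaptive filter is sketched below with assumed names and step size; the paper's contribution lies in the short model and its phase-tracking behaviour, which this generic sketch does not reproduce.

    import numpy as np

    def lms_filter(ref, primary, order=8, mu=0.01):
        # ref: reference input; primary: signal plus interference
        w = np.zeros(order)
        e = np.zeros(len(primary))      # error = cleaned output
        for n in range(order, len(primary)):
            x = ref[n - order:n][::-1]  # most recent samples first
            y = w @ x                   # adaptive filter output
            e[n] = primary[n] - y
            w += 2 * mu * e[n] * x      # LMS weight update
        return e, w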
Keywords:
Adaptive filter, All pass filter, FIR model, First order equalizer, HRV
filtering, Rate of convergence, Least Mean Squares.
Issue 5, Volume 4, May 2008
Title of the Paper: A
Low Complexity Approach to Turbo Equalizer
Authors: Aruna Tripathy,
Sant Saran Pathak, Saswat Chakrabarti
Abstract: The turbo equalizers (TEQ) proposed in the literature utilize equalizers based on trellises or on soft Wiener filters. The resulting complexity of these equalizers is exponential or cubic, respectively, in the length of the sampled channel impulse response (CIR). Interference-cancellation-based decision feedback equalizers require the simultaneous adaptation of two filters. In this paper, a low-complexity equalizer is proposed that uses neither a trellis nor a Wiener filter. The proposed equalizer utilizes a soft interference cancellation (SIC) technique that uses the log-likelihood ratios (LLR) available at the matched filter (MF) output for all the coded bits in a given block of data. The MF output is justified as Gaussian distributed and the LLRs are computed accordingly. These are fed as a priori information to the decoder after suitable deinterleaving. The soft estimates of the bits at the decoder output are used, with perfect channel tap knowledge, to form an estimate of the interference. This estimate of the interference is subtracted from the MF output, giving the SIC framework. We call the result a soft decision feedback equalizer (SDFE). The SDFE bypasses the filters completely, resulting in a complexity linear in the CIR length. Simulation results over four different channels show that the receiver performance improves with iterations, and a gap of 1-3 dB from the coded AWGN bound is observed depending on the channel type. Two different TEQs, based on the soft output Viterbi algorithm (SOVA) and on the Wiener filter respectively, are compared with the SDFE.
Keywords:
SIC, Wiener Filter, LLR, SDFE, SOVA
Title of the Paper: Semi-Hierarchical
Based Motion Estimation Algorithm for the Dirac Video Encoder
Authors: M. Tun, K. K.
Loo, J. Cosmas
Abstract: Fast and efficient motion estimation is crucial in today's advanced video compression techniques, since it determines both the compression efficiency and the complexity of a video encoder. In this paper, a method we call semi-hierarchical motion estimation is proposed for the Dirac video encoder. By applying fully hierarchical motion estimation only to a certain type of inter-frame encoding, the complexity of the motion estimation can be greatly reduced while maintaining the desired accuracy. The experimental results show that the proposed algorithm gives a two- to three-fold reduction in the number of SAD calculations compared with the existing motion estimation algorithm of Dirac, for the same motion estimation accuracy, compression efficiency and PSNR performance. Moreover, depending on the complexity of the test sequence, the proposed algorithm can increase or decrease the search range in order to maintain the accuracy of the motion estimation at a certain level.
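For context, the SAD cost that the algorithm reduces is the one used in standard full-search block matching, sketched here with assumed block and search-range parameters.

    import numpy as np

    def full_search(ref, cur, top, left, bs=8, rng=7):
        # returns the motion vector minimizing the sum of absolute differences
        block = cur[top:top + bs, left:left + bs].astype(int)
        best = (0, 0, np.inf)
        for dy in range(-rng, rng + 1):
            for dx in range(-rng, rng + 1):
                y, x = top + dy, left + dx
                if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                    sad = np.abs(block - ref[y:y + bs, x:x + bs]).sum()
                    if sad < best[2]:
                        best = (dy, dx, sad)
        return best  # (dy, dx, cost); up to (2*rng+1)**2 SAD evaluations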
Keywords:
Semi-Hierarchical, Motion Estimation, Dirac Wavelet Video Codec
Title of the Paper: FastICA
Algorithm for the Separation of Mixed Images
Authors: Arti Khaparde,
M. Madha Vilatha, M. B. L. Manasa, P. Anil Babu, S. Pradeep Kumar
Abstract: Independent component analysis is a generative model for observed multivariate data, which are assumed to be mixtures of some unknown latent variables. It is a statistical and computational technique for revealing hidden factors that underlie sets of random-variable measurements of signals. A common problem faced in disciplines such as statistics, data analysis, signal processing and neural networks is finding a suitable representation of multivariate data. The objective of ICA is to represent a set of multidimensional measurement vectors in a basis where the components are statistically independent. In the present paper we deal with a set of images that are mixed randomly. We apply the principles of uncorrelatedness and minimum entropy to find the ICA representation. The original images are then retrieved using a fixed-point algorithm known as the FastICA algorithm and compared with the original images with the help of the estimated error. The outputs of the intermediate steps of the algorithm, such as the PCA, the whitening matrix, the convergence of the algorithm and the dewhitening matrix, are also discussed.
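The core fixed-point update of FastICA for one component, using the common tanh nonlinearity, can be sketched as follows; it assumes the mixed images have already been flattened, centered and whitened into Z, and the names and tolerances are illustrative.

    import numpy as np

    def fastica_one_unit(Z, max_iter=200, tol=1e-6):
        # Z: whitened data, shape (n_signals, n_samples)
        rng = np.random.default_rng(0)
        w = rng.standard_normal(Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            wx = w @ Z
            g = np.tanh(wx)                      # nonlinearity
            g_prime = 1.0 - g ** 2               # its derivative
            w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
            w_new /= np.linalg.norm(w_new)
            if abs(abs(w_new @ w) - 1.0) < tol:  # converged up to sign
                return w_new
            w = w_new
        return w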
Keywords:
PCA, ICA,
Statistical independence, Non-gaussianity, Maximum Likelihood, Feature
Extraction.
Title of the Paper: A
Study on Ultrasonic Signals Processing Generated From Automobile Engine Block
Using Statistical Analysis
Authors: M. Z. Nuawi, S.
Abdullah, A. R. Ismail, R. Zulkifi, M. K. Zakaria, M. F. H. Hussin
Abstract: The development of statistical analysis has played an important part in studying the large data sets captured from engine blocks as part of engine monitoring and diagnosis. In this paper, statistical analysis is applied using the kurtosis, the I-kaz coefficient, the crest factor and the skewness parameters. These statistical parameters have the potential to serve as pattern-recognition features for identifying engine type and characteristics. The study was performed in two stages. The first stage is an experimental process using two three-cylinder automobile engines of 845 cc and 850 cc and two four-cylinder automobile engines of 1468 cc and 1784 cc, run under idle conditions. In the second stage, the signals' statistical parameters were plotted according to engine type. The plot of the statistical parameters against the I-kaz coefficient shows an excellent classification pattern, which is useful in determining engine type for signal confirmation and engine fault detection.
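The conventional descriptors among those parameters can be computed directly (the I-kaz coefficient is the authors' own measure and is not reproduced here); a minimal sketch with assumed names:

    import numpy as np
    from scipy.stats import kurtosis, skew

    def signature(x):
        # basic statistical descriptors of one vibration/ultrasonic record
        x = np.asarray(x, dtype=float)
        rms = np.sqrt((x ** 2).mean())
        return {
            "kurtosis": kurtosis(x, fisher=False),  # fourth-moment peakedness
            "skewness": skew(x),                    # distribution asymmetry
            "crest_factor": np.abs(x).max() / rms,  # peak-to-RMS ratio
        }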
Keywords:
Statistical Analysis, Signal Processing, Ultrasonic signal
Title of the Paper: A
Novel Postfiltering Technique Using Adaptive Spectral Decomposition for
Quality Enhancement of Coded Speech
Authors: Hassan Farsi
Abstract: An adaptive time-domain postfiltering technique based on factorisation of the synthesis LP filter is proposed. Information is gathered about the relation between the LP filter poles and the formants for this factorisation. The technique shapes the main formant differently from the other formants: the pole locations representing the main formant are modified, and optimum shaping constants for each formant are sought to narrow the main formant bandwidth while maintaining the other formant information and providing more attenuation in the valley regions.
Keywords:
Speech spectrum, postfilter, formant frequency, synthesis filter.
Title of the Paper: Real
Time Generation of the Quinquenary Pulse Compression Sequence using FPGA
Authors: N. Balaji, K.
Subba Rao, M. Srinivasa Rao
Abstract: Quinquenary codes have been widely used in radar and communications, but the design of quinquenary codes with a good merit factor is a nonlinear multivariable optimization problem, which is usually difficult to tackle. To solve this problem, many global optimization algorithms, such as the genetic algorithm, simulated annealing and the tunneling algorithm, have been reported in the literature. All these optimization algorithms have serious drawbacks: non-guaranteed convergence, slow convergence rates and a large number of required evaluations of the objective function. To overcome these drawbacks, we recently proposed an efficient VLSI architecture for the identification of quinquenary pulse compression sequences. Integrating that architecture with the architecture proposed here provides an efficient real-time hardware solution for the identification and generation of quinquenary pulse compression sequences. This paper describes the real-time generation of quinquenary pulse compression sequences using a Field Programmable Gate Array (FPGA). The proposed VLSI architecture is implemented on an FPGA, as it provides the flexibility of reconfigurability and reprogrammability.
Keywords:
Pulse compression, Quinquenary sequence, VLSI architecture, FPGA, Merit
Factor, Behavioral Simulation.
Title of the Paper: Computationally
Efficient Algorithm for Fuzzy Rule-Based Enhancement on JPEG Compressed Color
Images
Authors: Camelia Popa,
Mihaela Gordan, Aurel Vlaicu, Bogdan Orza, Gabriel Oltean
Abstract: In the past few years the resolution of images has increased, and the requirement for large storage space and for fast processing directly in the compressed domain has become essential. Fuzzy rule-based contrast enhancement is a well-known, rather simple approach with good visual results. Like any fuzzy algorithm, it is by default nonlinear and thus not straightforwardly applicable to the JPEG bitstream data, i.e. the zig-zag-ordered quantized DCT (Discrete Cosine Transform) coefficients. Because of their nonlinear nature, fuzzy techniques do not yet have a well-defined strategy for implementation in the compressed domain. In this paper, we propose an implementation strategy suitable for single-input single-output Takagi-Sugeno fuzzy systems with trapezoidal input membership functions, directly in the JPEG compressed domain. The fuzzy set parameters are chosen adaptively by analyzing the histogram of the image data in the compressed domain, in order to enhance the image contrast optimally. The fuzzy rule-based algorithm requires some threshold comparisons, for which an adaptive implementation is proposed that takes into account the frequency content of each block of the compressed-domain JPEG image. This guarantees a minimal-error implementation at minimum computational cost.
Keywords:
Compressed domain processing, Discrete Cosine Transform (DCT), nonlinear
operation, fuzzy rule-based contrast enhancement, fuzzy sets, color image
enhancement.
Title of the Paper: Algorithms
for Discrete Quadratic Time–Frequency Distributions
Authors: John M. O’
Toole, Mostefa Mesbah, Boualem Boashash
Abstract: Time–frequency distributions (TFDs) are costly to compute. We address this problem by presenting algorithms that reduce the computational load of computing TFDs. Before we can compute a TFD, however, we must first define a discrete version of it. Defining a discrete TFD (DTFD) is, unfortunately, not a straightforward process; for example, a popular DTFD definition does not satisfy all the desirable mathematical properties that are inherent to the continuous TFD. In this paper, we propose a new DTFD definition, the DTFD-C. This definition is closely related to another DTFD definition we recently proposed, the DTFD-B. The DTFD-B and DTFD-C satisfy all desirable properties. We provide algorithms for both definitions and show that the DTFD-C requires only 50% of the computational complexity and memory required to compute the DTFD-B.
Keywords:
Discrete time–frequency distributions (DTFD), discrete Wigner–Ville
distributions (DWVD), discrete-time signal processing (DSP), time–frequency
signal analysis, algorithms, computational load, fast Fourier transforms
(FFTs)
Title of the Paper: Image
Compression using Neural Networks and Haar Wavelet
Authors: Adnan Khashman, Kamil Dimililer
Abstract: Wavelet-based image compression provides substantial improvements in picture quality at higher compression ratios. Haar wavelet transform based compression is one of the methods that can be applied for compressing images. An ideal image compression system must yield good-quality compressed images with a good compression ratio, while maintaining minimal time cost. With wavelet transform based compression, the quality of compressed images is usually high, but the choice of an ideal compression ratio is difficult to make, as it varies depending on the content of the image. It is therefore of great advantage to have a system that can determine an optimum compression ratio upon being presented with an image. We propose that neural networks can be trained to establish the nonlinear relationship between the image intensity and its compression ratios in search of an optimum ratio. This paper suggests that a neural network can be trained to recognize an optimum ratio for Haar wavelet compression of an image upon presenting the image to the network. Two neural networks receiving different input image sizes are developed in this work, and a comparison between their performance in finding optimum Haar-based compression is presented.
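One level of the 2D Haar transform that underlies the compression can be sketched as below for an even-sized grayscale image; compression then keeps the approximation band and thresholds the detail bands. The names are illustrative.

    import numpy as np

    def haar2d_level(img):
        a = img.astype(float)
        # pairwise averages/differences along rows
        lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
        hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
        # then along columns, giving the four sub-bands
        ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
        lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
        hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
        hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
        return ll, lh, hl, hh   # keep ll; threshold lh, hl, hh to compress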
Keywords:
Optimum Image Compression, Haar Wavelet Transform, Neural Networks
Title of the Paper: Interference
Reduction in ECG using Digital FIR Filters based on Rectangular Window
Authors: Mahesh S.
Chavan, R. A. Agarwala, M. D. Uplane
Abstract: Coronary heart disease (CHD) is the leading cause of death for both men and women all over the world, including India. CHD is caused by a narrowing of the coronary arteries that supply blood to the heart, and often results in a heart attack. Each year, millions of people suffer heart attacks, many of which are fatal; about half of the resulting deaths occur within one hour of the onset of symptoms, before the person reaches hospital. A heart attack is a medical emergency requiring hospitalization and possibly intensive care. The ECG is a very important signal in cardiology; various artifacts corrupt it, so care should be taken to avoid interference in the ECG. The present work is in that direction and deals with the design of FIR filters using the rectangular window. Three filters are designed, namely a low-pass filter, a high-pass filter and a notch filter, and the filters are also cascaded. These filters are applied to the ECG signal in real time; for the real-time application, the 711B add-on card has been used. The results clearly indicate a noise reduction in the ECG signal. Comparative results are provided in the paper.
Keywords:
Rectangular window, real-time ECG processing, Matlab Simulink.
Issue 6, Volume 4, June 2008
Title of the Paper: Off-Line
Cursive Handwritten Tamil Character Recognition
Authors: R. Jagadeesh
Kannan, R. Prabhakar
Abstract: In spite of several advancements in technologies pertaining to optical character recognition, handwriting persists as a means of documenting information in day-to-day life. The processes of segmentation and recognition pose quite a lot of challenges, especially in recognizing cursive handwritten scripts of different languages. The proposed concept is a solution crafted to perform character recognition of handwritten script in Tamil, a language having official status in India, Sri Lanka, and Singapore. The approach utilizes discrete Hidden Markov Models (HMMs) for recognizing off-line cursive handwritten Tamil characters. The tolerance of the system is evident, as it can overcome the complexities arising from font variations and proves to be flexible and robust. A high degree of accuracy has been obtained by applying this approach to a comprehensive database, and the precision of the results demonstrates its suitability for commercial use. The methodology promises a simple and fast scaffold on which to construct a full OCR system, extended with suitable pre-processing.
Keywords:
Optical Character Recognition (OCR), Cursive Script Recognition, Handwritten
Script Recognition, Segmentation, Offline Recognition, Hidden Markov Model
(HMM)
Title of the Paper: Spline
Wavelet Packets Application: Doppler Signal Analysis During Operation Time
Authors: E. Serrano,
R. O. Sirne, M. Fabio, A. Viegener, C. E. D' Attellis, J. Guglielmone
Abstract: Wavelet methods play a significant role in signal processing; they are multifaceted tools, and many choices and alternatives are open. In particular, the discrete wavelet transform decomposes a given signal in a filter bank, or time-scale scheme, called a multiresolution analysis. The wavelet coefficients then reflect the signal information in an efficient structure. Wavelet packets, in a second and deeper analysis, refine the scheme and give more frequency precision. In this article, we apply these techniques in a spline framework to process Doppler radar signals. Over-the-horizon radars operate in the high-frequency band; they are able to detect targets beyond the horizon and are employed in many applications. Such a radar operates for long periods of time without interruption, which requires analyzing the echo signal during the time of operation. For this case, we propose an adaptation of Mallat's algorithm: the method computes the wavelet coefficients of consecutive intervals of the signal in a multiresolution analysis framework. The coefficients are calculated and used efficiently to estimate the radial velocity of the target over time.
Keywords:
Wavelets, wavelet packet, multiresolution, spline, radar, signal
segmentation.
Title of the Paper: Performance
Evaluation of Motion Estimation in DCT Transform Domain
Authors: Petrescu
Catalin-Dumitru, Stefanoiu Dan, Lupu Ciprian
Abstract: Motion estimation is one of the most important steps in video compression algorithms. It reduces the temporal redundancy present in frame sequences and allows better compression of video material. Most current video compression algorithms use “block matching” methods, which operate on the bitmap form of the frames. This paper presents a method for computing the DCT coefficients of a block of pixels positioned at arbitrary coordinates over four adjacent blocks, using only the DCT coefficients of those four blocks. The performance of this method is analyzed for both integer and non-integer displacements. An equivalent of the full-search algorithm translated into the 2D-DCT domain is also presented.
Keywords:
motion estimation, block matching, discrete cosine transform, video
compression, match function
Title of the Paper: Inharmonic Dispersion Tunable Comb Filter Design Using Modified IIR Band Pass Transfer Function
Authors: Varsha Shah, R. S. Patil
Abstract: An excitation/filter system for synthesizing inharmonic sounds is presented, with an application to the piano, a stiff-string instrument. Specific features of the piano string that are important in wave propagation (dispersion due to stiffness, frequency-dependent losses and the presence of phantom partials) are included in the proposed model. A modified narrow bandpass filter is used as the basic building block in modeling the vibrating structure, and a parallel bank of narrow bandpass filters is used to model the dispersion. The center frequencies of the narrow bandpass filters can be tuned to follow the same relation as the partial frequencies of a piano tone. A novel loss filter is designed to implement the frequency-dependent losses. The resulting model is called the Inharmonic Dispersion Tunable Comb filter.
Keywords:
Bandpass, Bandstop, Dispersion, Inharmonicity, Synthesis
Issue 7, Volume 4, July 2008
Title of the Paper: Slovenian Spontaneous Speech Recognition and Acoustic Modeling of Filled Pauses and Onomatopoeias
Authors: Andrej
Zgank, Tomaz Rotovnik, Mirjam Sepesy Maucec
Abstract: This paper focuses on acoustic modeling for spontaneous speech recognition, which remains a very challenging task for the speech technology research community. The attributes of spontaneous speech can heavily degrade a speech recognizer's accuracy and performance. Filled pauses and onomatopoeias are among the important attributes of spontaneous speech that can yield considerably worse accuracy. Although filled pauses carry no semantic information, they are still very important from the modeling perspective. A novel acoustic modeling approach is proposed in this paper, in which filled pauses are modeled using phonetic broad classes corresponding to their acoustic-phonetic properties. The phonetic broad classes are language dependent and can be defined by an expert or in a data-driven way. The new filled-pause modeling approach is compared with three other implicit filled-pause modeling methods. All experiments were carried out using a context-dependent Hidden Markov Model based speech recognition system. For training and evaluation, the Slovenian BNSI Broadcast News speech and text database was used; it contains manually transcribed recordings of TV news shows. The proposed acoustic modeling approach was evaluated on a set of spontaneous speech. The overall best filled-pause acoustic modeling approach improved the speech recognizer's word accuracy by 5.70% relative in comparison to the baseline system, without influencing the recognition time.
Keywords:
speech recognition, acoustic modeling, filled pauses, onomatopoeias, Slovenian spontaneous speech, broadcast news, HMM
Title of the Paper: Fault
Characterisation and Classification Using Wavelet and Fast Fourier Transforms
Authors: E. E Ngu, K.
Ramar, R. Montano, V. Cooray
Abstract: In order to improve power quality maintenance and the reliability of the power supply, different types of faults on a transmission line, namely open-circuit (OC) faults, short-circuit (SC) faults, high-impedance faults (HIF) and faults caused by direct lightning strikes (LS), have been investigated in this paper. The disturbances have been modelled and simulated using a well-known transient simulation tool, the Alternative Transients Program/Electromagnetic Transients Program (ATP/EMTP), and the resulting data are then imported into MATLAB for investigation of the travelling wave (TW) reflection pattern and the harmonic behaviour. The characteristics of the faults in terms of their frequency spectra and the polarities of the incident and reflected waves have been studied, and the possibility of differentiating the type of fault is explored. For this purpose, faults have been created at the moment the voltage signal reaches its peak and also when it is close to zero. Both Wavelet Transform (WT) and Fast Fourier Transform (FFT) methods have been used to analyze the transient signals generated by the fault. The model of the network used in this study is taken from [1]-[2].
Keywords:
WT, FFT, ATP/EMTP, current reflection pattern, and spectrum analysis
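As a rough illustration of the kind of analysis described (not the authors'
code, with a synthetic signal standing in for the ATP/EMTP data), the
following Python sketch runs both an FFT and a wavelet decomposition over a
simulated fault transient:

    import numpy as np
    import pywt

    fs = 10_000                           # sampling rate in Hz (assumed)
    t = np.arange(0, 0.1, 1 / fs)
    sig = np.sin(2 * np.pi * 50 * t)      # 50 Hz fundamental
    sig[500:520] += 5.0 * np.exp(-np.arange(20) / 5.0)  # synthetic transient

    # FFT: overall frequency content of the disturbance
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)

    # Wavelet decomposition: detail coefficients localize the transient
    coeffs = pywt.wavedec(sig, 'db4', level=4)
    d1 = coeffs[-1]                       # finest-scale details flag the onset
    print(freqs[np.argmax(spectrum)], np.argmax(np.abs(d1)))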
Title of the Paper: 3D
Techniques used for Conservation of Museum Patrimony
DOWNLOAD
FULL PDF
Authors: A. Chioreanu,
N. Paul, A. Vlaicu, B. Orza
Abstract: The paper presents the implementation of a 3D acquisition system
intended to be used as a tool in the conservation of folk heritage objects.
We developed a computationally efficient and cost-effective 3D
reconstruction system for acquiring, reconstructing and presenting the 3D
shape of heritage objects. The proposed solution for 3D reconstruction is
based on a phase-shifting fringe projection algorithm. This paper presents
a simple analysis of fringe pattern reconstruction models as well as the
details of our solution. The results show that the proposed method is well
suited for such applications and is more widely applicable.
Keywords: OCC,
Panoramic images, Fringe pattern, 3D reconstruction, Digital library
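The phase-shifting step can be sketched with the generic four-step
phase-shifting formula, shown here in 1D on synthetic data (this is the
textbook algorithm, not the authors' implementation):

    import numpy as np

    x = np.linspace(0, 4 * np.pi, 256)
    phi_true = 0.5 * np.sin(x)              # hypothetical surface phase
    # Four fringe images shifted by 0, 90, 180 and 270 degrees:
    # I_n = A + B * cos(phi + n * pi / 2), here with A = 0, B = 1
    i1, i2, i3, i4 = (np.cos(phi_true + n * np.pi / 2) for n in range(4))

    phi = np.arctan2(i4 - i2, i1 - i3)      # wrapped phase
    phi = np.unwrap(phi)                    # 1D unwrapping suffices here
    print(np.allclose(phi, phi_true))       # True: the phase is recovered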
Title of the Paper: Homogeneous
Pin-Through-Hole Component Inspection Using Fringe Tracking
DOWNLOAD
FULL PDF
Authors: K. Sundaraj
Abstract: Automated visual inspection (AVI) systems play an important role
in ensuring manufacturing quality in the electronics industry, especially
in the assembly of printed circuit boards (PCBs). Most existing AVI
systems used for PCB inspection are categorized as non-contact inspection
systems consisting of a single overhead camera. Such systems, which can be
manual or automated, are incapable of detecting and reporting 3D
pin-through-hole (PTH) component placement defects. By considering an
assembled PCB as a textured surface with a predefined depth map, we
propose to apply an angled fringe projection to detect defects in PTH
component placement. It has been found that angled fringe projection can
be used for surface analysis by applying phase shifting and phase
unwrapping obtained from several images. However, the turnaround time for
PCB inspection is crucial in the electronics industry; an alternative
method that speeds up the inspection process is therefore always
desirable. This paper describes a method of applying an angled fringe
projection for 3D height measurement using a single captured image and a
direct triangulation technique. The main focus of this paper is the
development of a fringe tracking algorithm and its practical
implementation details. This algorithm allows us to obtain the depth map
of the surface under analysis with just one image. The simulated data and
calibration process of the tracking algorithm are discussed, and an
experimental result is given for Peripheral Component Interconnect (PCI)
component insertion in computer motherboards. With proper system
calibration and accurate image processing, we demonstrate the successful
use of a structured collimated light source for height measurement using a
single captured image.
Keywords:
Automated Visual Inspection, Fringe Projection, PCB Inspection.
Issue
8, Volume 4, August 2008
Title of the Paper: Video
Target Tracking by using Competitive Neural Networks
DOWNLOAD
FULL PDF
Authors: Ernesto
Araujo, Cassiano R. Silva, Daniel J. B. S. Sampaio
Abstract: A target tracking algorithm able to identify the position of
moving targets in digital video sequences and pursue them is proposed in
this paper. The proposed approach aims to track moving targets inside the
field of view of a digital camera. The position and trajectory of the
target are identified using a neural network with a competitive learning
technique. The winning neuron is trained to approach the target and then
pursue it. A digital camera provides a sequence of images, and the
algorithm processes those frames in real time while tracking the moving
target. The algorithm is tested on both black-and-white and multi-colored
images to simulate real-world situations. Results show the effectiveness
of the proposed algorithm, since the neurons tracked the moving targets
even without any image pre-processing. Single and multiple moving targets
are followed in real time.
Keywords:
Image motion, Target Tracking, Neural Network, Video Digital Camera,
Computational Intelligence.
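A minimal sketch of the winner-take-all idea described above (illustrative
only; the frame coordinates and learning rate are hypothetical):

    import numpy as np

    def competitive_step(weights, target_xy, lr=0.2):
        # Winner-take-all update: the neuron closest to the detected
        # target moves toward it; all other neurons stay put.
        d = np.linalg.norm(weights - target_xy, axis=1)
        win = np.argmin(d)
        weights[win] += lr * (target_xy - weights[win])
        return win

    neurons = np.random.rand(5, 2) * 100           # 5 neurons, 100x100 frame
    for target in [(10, 10), (12, 11), (15, 13)]:  # hypothetical detections
        competitive_step(neurons, np.asarray(target, dtype=float))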
Title of the Paper:
Estimating Parameters of Sinusoids from Noisy Data Using Bayesian Inference
with Simulated Annealing
DOWNLOAD
FULL PDF
Authors: Dursun Ustundag, Mehmet Cevri
Abstract: In this paper, we consider the Bayesian analysis proposed by
Bretthorst for estimating the parameters of corrupted signals and combine
it with a simulated annealing algorithm to obtain a global maximum of the
posterior probability density of the parameters. This analysis thus offers
a different way to find parameter values through a directed, but random,
search of the parameter space. For this purpose, we developed a software
implementation of this Bayesian approach and used it for recovering
sinusoids corrupted by random noise. The simulation results support the
effectiveness of the method.
Keywords:
Bayesian Statistical Inference, Simulated Annealing, Parameter Estimation,
Optimization, Spectral Analysis, Signal Processing.
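For a flavour of the approach, the sketch below anneals the frequency of a
single noisy sinusoid. It uses the periodogram as the objective (the
sufficient statistic in the single-sinusoid case) rather than the full
posterior, so it approximates the paper's method rather than reproducing
it; all schedule constants are assumptions.

    import numpy as np
    rng = np.random.default_rng(0)

    t = np.arange(200)
    x = np.cos(2 * np.pi * 0.123 * t + 0.7) + 0.5 * rng.standard_normal(t.size)

    def score(f):
        # Periodogram power at frequency f (cycles/sample)
        return np.abs(np.sum(x * np.exp(-2j * np.pi * f * t))) ** 2 / t.size

    f = 0.25                       # arbitrary starting frequency
    temp = 10.0                    # initial temperature (assumed scale)
    for _ in range(20000):
        cand = float(np.clip(f + 0.005 * rng.standard_normal(), 0.0, 0.5))
        delta = score(cand) - score(f)
        if delta > 0 or rng.random() < np.exp(delta / temp):
            f = cand               # accept uphill moves, some downhill ones
        temp = max(temp * 0.9995, 1e-6)   # cooling schedule
    print(round(f, 4))             # should settle near the true 0.123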
Title of the Paper:
Nonlinear Extension of Inverse Filters for Decomposition of Monotonic
Multi-Component Signals
DOWNLOAD
FULL PDF
Authors: Vairis
Shtrauss
Abstract: The article is devoted to improving the quality of decomposition
of monotonic multi-component time- and frequency-domain signals.
Decomposition filters operating on data sampled at geometrically spaced
times or frequencies (i.e., equally spaced on a logarithmic scale) are
combined with artificial neural networks. A nonlinear processing unit,
which can be considered a deconvolution network or a nonlinear
decomposition filter, is proposed, composed of several linear
decomposition filters with common inputs whose outputs are nonlinearly
transformed, multiplied by weights and summed. One of the fundamental
findings of this study is a square activation function, which provides
some useful features for the decomposition problem under consideration.
First, contrary to conventional activation functions (sigmoid, radial
basis functions), the square activation function allows sharper peaks of
distributions of time constants (DTC) to be recovered. Second, it ensures
physically justified nonnegativity of the recovered DTC. Third, the square
activation function transforms Gaussian input noise into nonnegative
output noise with a specific probability distribution whose standard
deviation is proportional to the variance of the input noise, which, in
most practical cases where the noise level in the data is relatively low,
radically increases the noise immunity of the proposed nonlinear
algorithms. Practical implementation and application issues are described,
such as network training, choice of the initial guess, data normalization
and smoothing. Illustrative examples and simulations performed with a
developed deconvolution network are presented, demonstrating the improved
decomposition quality for a frequency-domain multi-component signal.
Keywords:
Decomposition, Monotonic Multi-Component Signals, Distribution of Time
Constants, Decomposition Filters, Square Activation Function, Deconvolution
Networks.
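A minimal sketch of the proposed nonlinear unit (the filter impulse
responses and weights below are placeholder values, not trained ones):

    import numpy as np

    def nonlinear_decomposition(data, filters, weights):
        # Several linear decomposition filters share the same input; their
        # outputs pass through the square activation (which guarantees the
        # nonnegativity of the recovered DTC) and are weighted and summed.
        outs = [np.convolve(data, h, mode='same') for h in filters]
        return sum(w * o ** 2 for w, o in zip(weights, outs))

    data = np.random.randn(64)
    dtc = nonlinear_decomposition(data,
                                  [np.ones(3) / 3, np.array([1.0, -1.0])],
                                  [0.7, 0.3])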
Title of the Paper:
Automatic Real Time Localization of Frowning and Smiling Faces under
Controlled Head Rotations
DOWNLOAD
FULL PDF
Authors:
Yulia Gizatdinova, Jouni Erola, Veikko Surakka
Abstract: The aim of the present study was to develop a new method for
fast localization of the face from streaming colour video. In the method,
the face is located by analyzing local properties of four facial
landmarks, namely, the regions of the eyes, nose, and mouth. The method
consists of three stages. First, the face-like skin-coloured image region
is segmented from the background and transformed into a grey-scale
representation. Second, the cropped image is convolved with the Sobel
operator in order to extract local oriented edges at 16 orientations. The
extracted edges are further grouped to form regions of interest
representing candidates for facial landmarks. The orientation portraits,
which define the distribution of local oriented edges inside the located
region, are matched against the edge orientation model to verify the
existence of the landmark in the image. The located landmarks are
spatially arranged into face-like constellations. Finally, the best
face-like constellation of the landmark candidates is selected by a new
scoring function. The test results showed that the proposed method located
neutral, frowning, and smiling faces at high rates in real time from
facial images under controlled head rotation variations.
Keywords:
Face localization, Facial landmarks, Sobel edge detection, Frontal view
geometrical face model, Facial expressions, Head rotations.
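The second stage, quantizing Sobel gradients into 16 orientation bins,
might look like the following sketch (the threshold and test image are
arbitrary; this is not the authors' code):

    import numpy as np
    from scipy import ndimage

    def oriented_edges(gray, n_orient=16, thresh=50.0):
        # Sobel gradients, then gradient direction quantized into 16 bins;
        # histogramming the bins inside a candidate region gives its
        # "orientation portrait".
        gx = ndimage.sobel(gray, axis=1, output=float)
        gy = ndimage.sobel(gray, axis=0, output=float)
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
        bins = (ang / (2 * np.pi) * n_orient).astype(int) % n_orient
        bins[mag < thresh] = -1            # -1 marks "no edge here"
        return bins

    gray = np.zeros((64, 64))
    gray[:, 32:] = 255.0                   # toy vertical edge
    print(np.unique(oriented_edges(gray)))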
Title of the Paper: An
Iterative Algorithm for Automatic Fitting of Continuous Piecewise Linear
Models
DOWNLOAD
FULL PDF
Authors: Miguel
A. Garcia, Francisco Rodriguez
Abstract: Continuous piecewise linear models constitute useful tools for
extracting the basic features of growth patterns in complex time series
data. In this work, we present an iterative algorithm for continuous
piecewise regression with automatic change-point estimation. The algorithm
requires an initial guess for the number and positions of the
change-points, or hinges, which can be obtained with different methods,
and then proceeds by iteratively adjusting these hinges with displacements
similar to those of Newton's algorithm for function root finding. The
algorithm can be applied to high volumes of data, with very fast
convergence in most cases, and also allows sufficiently close hinges to be
identified and merged, thus reducing the number of change-points and
resulting in models of low complexity. Examples of applications to feature
extraction from remote sensing vegetation index time series data are
presented.
Keywords:
Continuous piecewise regression, Segmented regression, Multiple change-point
models, Remote sensing, NDVI, MODIS.
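For fixed hinge positions, the inner least-squares step of such a model
can be sketched as below; the iterative hinge displacement described in
the abstract is omitted, and the data are synthetic:

    import numpy as np

    def cpl_fit(x, y, hinges):
        # Least-squares fit of a continuous piecewise linear model with
        # FIXED hinge locations, using the basis {1, x, max(0, x - c_j)};
        # the paper's algorithm additionally moves the hinges iteratively.
        cols = [np.ones_like(x), x] + [np.maximum(0.0, x - c) for c in hinges]
        A = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return beta, A @ beta

    x = np.linspace(0, 10, 200)
    y = np.where(x < 4, x, 4 + 0.2 * (x - 4)) + 0.05 * np.random.randn(x.size)
    beta, yhat = cpl_fit(x, y, hinges=[4.0])  # hinge guess at the true break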
Title of the Paper: Rules
and Feature Extraction for Microcalcification Detection in Digital
Mammograms Using Neuro-Symbolic Hybrid Systems and Undecimated Filter Banks
DOWNLOAD
FULL PDF
Authors: Osslan
Osiris Vergara Villegas, Humberto De Jesus Ochoa Dominguez, Vianey Guadalupe
Cruz Sanchez, Efren David Gutierrez Casas, Gerardo Reyes Salgado
Abstract: In this paper, we present a Neuro-Symbolic Hybrid System (NSHS)
methodology to improve the recognition stage of benign or malignant
microcalcifications in mammography. In the first stage, we use five
different undecimated filter banks in order to detect the
microcalcifications, which appear as a small number of high-intensity
pixels compared with their neighbours. Once the microcalcifications are
detected, we extract rules in order to obtain the image features. Finally,
we classify each microcalcification into one of three classes: benign,
malignant, or normal. The results obtained show that there is no
substantial difference in the number of detected microcalcifications among
the several filter banks used, and that the proposed NSHS methodology can
improve, in the future, the results of microcalcification recognition.
Keywords:
Breast cancer, Microcalcification detection, Undecimated filter bank,
NSHS.
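As an illustration of the detection stage (a single wavelet standing in
for the paper's five filter banks; the wavelet choice and threshold are
assumptions), an undecimated wavelet transform keeps the detail subbands
aligned with the image, so bright isolated pixels can be thresholded
directly:

    import numpy as np
    import pywt

    img = 0.05 * np.random.randn(256, 256)
    img[100, 120] = img[130, 90] = 1.0    # bright spots standing in for
                                          # microcalcifications

    # Undecimated (stationary) wavelet transform: no downsampling, so the
    # detail subbands stay aligned with the image.
    cA, (cH, cV, cD) = pywt.swt2(img, 'db2', level=1)[0]
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)
    print(np.argwhere(detail > detail.mean() + 4 * detail.std())[:5])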
Title of the Paper:
Testing of Image Segmentation Methods
DOWNLOAD
FULL PDF
Authors: I. V. Gribkov, P. P. Koltsov, N. V. Kotovich,
A. A. Kravchenko, A. S. Koutsaev, A. S. Osipov, A. V. Zakharov
Abstract: Digital image segmentation is broadly used in various image
processing tasks. The large number of image segmentation methods gives
rise to the problem of choosing the method most adequate for practical
purposes. In this paper, we develop an approach which allows quantitative
and qualitative evaluation of segmentation programs. It consists of
modeling both difficult and typical situations in image segmentation tasks
using special sets of artificial test images. A description of the test
images and testing procedures is given. Our approach clarifies the
specific features and applicability limits of the four segmentation
methods under examination.
Keywords:
image processing, energy minimization, image segmentation, ground truth,
testing, performance evaluation.
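One simple quantitative estimate of the kind such a benchmark can report
is pixel-wise precision/recall against an artificial ground-truth image; a
minimal sketch (synthetic images, not the authors' test sets):

    import numpy as np

    def segmentation_scores(result, truth):
        # Pixel-wise precision and recall of a binary segmentation against
        # the ground truth encoded in the artificial test image.
        tp = np.logical_and(result, truth).sum()
        return tp / max(result.sum(), 1), tp / max(truth.sum(), 1)

    truth = np.zeros((64, 64), bool)
    truth[16:48, 16:48] = True             # synthetic ground-truth region
    result = np.roll(truth, 2, axis=0)     # a slightly shifted segmentation
    print(segmentation_scores(result, truth))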
Title of the
Paper: Satellite Sub-Pixel Rainfall Variability
DOWNLOAD
FULL PDF
Authors:
Eric W. Harmsen, Santa Elizabeth Gomez Mesa, Edvier Cabassa, Nazario D.
Ramirez-Beltran, Sandra Cruz Pol, Robert J. Kuligowski, Ramon Vasquez
Abstract: Rain gauge networks are used to calibrate and validate
quantitative precipitation estimation (QPE) methods based on remote
sensing, which may be used as data sources for hydrologic models. The
typical approach is to adjust (calibrate) or compare (validate) the
rainfall in the QPE pixel with the rain gauge located within the pixel.
The QPE result represents the mean rainfall over the pixel area, whereas
the rainfall from the gauge represents a point, although it is normally
assumed to represent some area. In most cases the QPE pixel area is
millions of square meters in size. We hypothesize that some rain gauge
networks in environments similar to this study (i.e., tropical coastal),
which provide only one rain gauge per remote sensing pixel, may lead to
errors when used to calibrate/validate QPE methods, and that consequently
these errors may be propagated throughout hydrologic models. The objective
of this paper is to describe a ground-truth rain gauge network located in
western Puerto Rico which will be available to test our hypothesis. In
this paper we discuss observations from the rain gauge network, but do not
present any QPE validation results. In addition to being valuable for
validating satellite and radar QPE data, the rain gauge network is being
used to test and calibrate atmospheric simulation models and to gain a
better understanding of the sea breeze effect and its influence on
rainfall. In this study, a large number of storms (> 60) were evaluated
between August 2006 and August 2008. The area covered by the rain gauge
network was limited to a single GOES-12 pixel (4 km x 4 km). Five-minute
and total storm rainfall amounts were spatially variable at the sub-pixel
scale. The average storm rainfall from 20% of the 120 possible rain gauge
pairs was found to be significantly different at the 5% significance
level, indicating significant rainfall variation at the sub-pixel scale.
The average coefficient of determination (r^2), describing the goodness of
fit of a linear model relating rain gauge pairs, was 0.365, further
suggesting a significant degree of variability at the satellite sub-pixel
scale. Although several different storm types were identified (localized,
upper westerly trough, tropical easterly wave, tropical westerly trough,
cold front, and localized with cold front), there did not appear to be any
relationship between storm type and the correlation patterns among the
gauges.
Keywords:
satellite pixel, rainfall variability, QPE, rain gauge, radar, validation,
hydrologic modeling.
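The pairwise comparison described above (120 pairs from 16 gauges) can be
sketched in a few lines; the rainfall matrix here is random placeholder
data, not the network's observations:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)
    rain = rng.gamma(2.0, 10.0, size=(60, 16))  # rain[storm, gauge]:
                                                # 60 storms, 16 gauges
    r2 = [np.corrcoef(rain[:, a], rain[:, b])[0, 1] ** 2
          for a, b in combinations(range(rain.shape[1]), 2)]
    print(len(r2), float(np.mean(r2)))          # 120 pairs, mean r^2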
Title of the Paper:
A
Locally Tuned Nonlinear Technique for Color Image Enhancement
DOWNLOAD
FULL PDF
Authors:
Saibabu Arigela, Vijayan K. Asari
Abstract: An innovative technique for the enhancement of digital color
images captured under extremely nonuniform lighting conditions is proposed
in this paper. The key contributions of this technique are adaptive
intensity enhancement, contrast enhancement and color restoration.
Simultaneous enhancement of extremely dark and bright intensity regions in
an image is performed by a specifically designed Locally Tuned Sine
Nonlinear (LTSN) function. The intensity of each pixel is tuned based on
its surrounding pixels to accomplish contrast enhancement. Retrieval of
the color information from the enhanced intensity image is achieved by a
linear color restoration process based on the chromatic information of the
input image. Experimental evaluations show that the proposed algorithm can
be effectively used for improving the visibility of nighttime surveillance
video sequences with frames having extremely bright and dark regions.
Keywords:
dynamic range compression, intensity transformation, image enhancement,
adaptive intensity enhancement, contrast enhancement, sine nonlinearity
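The paper defines the LTSN function itself; purely as an illustration of a
locally tuned sine nonlinearity (explicitly not the paper's formula), one
can write:

    import numpy as np

    def sine_tone_map(v, q):
        # v is intensity normalized to [0, 1]; the exponent q, derived
        # from the local surround mean, pushes dark regions up (q < 1)
        # while compressing bright ones (q > 1).  This is NOT the paper's
        # exact LTSN function.
        return np.sin(np.pi / 2 * v ** q)

    print(sine_tone_map(np.linspace(0, 1, 5), q=0.5))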
Issue
9, Volume 4, September 2008
Title of the
Paper:
New
Aspects in Numerical Representations Involved in DNA Repeats Detection
DOWNLOAD
FULL PDF
Authors:
Petre G. Pop
Abstract: The presence of repeated sequences is a fundamental feature of
biological genomes. The detection of tandem repeats is important in
biology and medicine, as it can be used for phylogenetic studies and
disease diagnosis. A major difficulty in the identification of repeats
arises from the fact that the repeat units can be either exact or
imperfect, in tandem or dispersed, and of unspecified length. Many of the
methods for detecting repeated sequences come from the digital signal
processing field and involve the application of some kind of transform.
Applying a transform technique requires mapping the symbolic domain into
the numeric domain in such a way that no additional structure is placed on
the symbolic sequence beyond that inherent to it; the numerical
representation of genomic signals is therefore very important. This paper
presents results obtained by combining grey-level spectrograms with two
novel numerical representations to isolate the position and length of DNA
repeats.
Keywords:
Genomic Signal Processing, Sequence Repeats, DNA Representations, Fourier
analysis, Spectrograms
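A minimal sketch of the general pipeline, using a common complex A/C/G/T
mapping in place of the paper's two novel representations, and a toy
sequence:

    import numpy as np
    from scipy.signal import spectrogram

    seq = "ACGT" * 8 + "ACGTTGCAACGT" * 20  # toy sequence, repeated unit
    mapping = {'A': 1, 'C': -1, 'G': 1j, 'T': -1j}  # one common mapping
    num = np.array([mapping[b] for b in seq])

    f, pos, S = spectrogram(num, window='hann', nperseg=64, noverlap=48,
                            return_onesided=False)
    # A tandem repeat with unit length L concentrates energy near the
    # frequency 1/L; displaying S as a grey-level image localizes both
    # the period and the position of the repeat.
    print(S.shape)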
Title of the
Paper:
Image
Transmission through Incoherent Optical Fiber Bundle: Methods for Optimization
and Image Quality Improvement
DOWNLOAD
FULL PDF
Authors:
O. Demuynck, J. M. Menendez
Abstract: Artificial vision, in spite of all the theoretical and practical
progress accomplished during the last thirty years, still cannot be
employed in some hazardous and harsh industrial areas, where conditions
are such that cameras would not operate properly. Possible alternatives
are the quite recent IP65 and IP67 industrial cameras and associated
connectors, protected by an anticorrosive, waterproof and
high-temperature-resistant housing with dedicated electronics, and robust
to accelerations of almost 3 g. Nevertheless, such cameras are still very
expensive compared to conventional industrial cameras and would still not
be sufficient in the hardest conditions (explosive gas or dust
environments) or in electromagnetically noisy environments. A good
alternative in extreme conditions is the use of optical fiber bundles.
Since such cables only transmit light, they are intrinsically immune to
electromagnetic interference and, depending on the fiber material, can be
exposed to very high temperatures (over 1000°C for sapphire fibers, for
example), can be employed in almost all corrosive environments, and are
totally submersible. Nevertheless, coherent optical fiber bundles are very
expensive and, over large distances, can be non-competitive compared with
the other hardware solutions (armor plating, electromagnetic shielding).
Thus, the most suitable option for developing a competitive system in
those particular cases is the use of an incoherent optical fiber bundle
(IOFB), nowadays employed only for illumination tasks. Image transmission
in this case is not direct but requires a calibration step, which we
discuss in this paper. The improvement of the resulting noisy image
quality, achieved by experimental post-calibration methods, is also
presented. We propose in this work a new calibration method for incoherent
optical fiber bundles for image transmission purposes. First, we present
the calibration method (an enhancement of previously published calibration
methods), some resulting reconstructed images, and a discussion of their
quality improvement using simple denoising methods assisted by low-pass
filters (smoothing filter, resizing method, etc.). We then describe the
two new post-calibration methods and the associated noise-generating
physical phenomena they are meant to treat: correction of the input-plane
optical aberrations and extraction of a unique (almost centered) pixel for
each fiber. We finally present some resulting images that demonstrate how
these methods efficiently refine the reconstruction Look-Up Table (LUT)
used for the output image reconstruction, improving image quality and
image contrast as well as reconstruction processing time.
Keywords: Image
transmission, Incoherent optical fiber bundle, Hazardous environment,
Calibration
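Once calibration has produced the fiber-to-pixel Look-Up Table,
reconstruction itself reduces to a single gather; a toy sketch (sizes and
the LUT itself are placeholders, not calibration output):

    import numpy as np

    def reconstruct(fiber_values, lut):
        # lut[y, x] holds the index of the fiber whose (almost centered)
        # pixel feeds output position (y, x); reconstruction is a gather.
        return fiber_values[lut]

    lut = np.random.randint(0, 16, size=(8, 8))   # placeholder 16-fiber LUT
    frame = reconstruct(np.random.rand(16), lut)  # one reconstructed frame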
Title of the
Paper:
Activity
Index Variance as an Indicator of the Number of Signal Sources
DOWNLOAD
FULL PDF
Authors:
Rok Istenic, Damjan Zazula
Abstract: In this paper we
introduce a novel technique that can be used as an indicator of the number of
active signal sources in convolutive signal mixtures. The technique is
designed so that the number of sources is estimated using only recorded
signals and some marginal information, such as possible minimum and maximum
triggering frequencies of the sources, but no information on the mixing
matrix, other parameters of the signal sources, etc. Our research is based
on the convolution kernel compensation (CKC) method, a blind source
separation technique.
First, a correlation matrix of the recorded signals is estimated. Next, a
measure of the global activity of the signal sources, called activity index,
is introduced. The exact analytical model of the activity index variance was
derived for the purpose of the estimation of the number of signal sources.
Using the analytical model, the number of active signal sources can be
estimated if some a priori marginal information is available. We evaluated
these marginal parameter values in extensive simulations of compound signals.
The number of sources, their lengths, signal-to-noise ratio, source
triggerings, and the number of measurements were randomly combined in
preselected ranges. By using the established marginal parameter values and
increasing extension factors, the model of the activity index variance was
deployed to estimate the number of signal sources. The estimation results
using synthetic signal mixtures are very promising.
Keywords: Compound
signals, estimation of the number of sources, correlation matrix, convolutive
signal mixture, variance model, convolution kernel compensation
Title of the
Paper:
Object
Detection and Segmentation Algorithm Implemented on a Reconfigurable Embedded
Platform Based FPGA
DOWNLOAD
FULL PDF
Authors:
Slami Saadi, Hamza Mekki, Abderrezak Guessoum
Abstract: In this article, we present a mixed software/hardware
implementation on a Xilinx MicroBlaze soft-core-based FPGA platform. The
reconfigurable embedded platform is designed to support an important image
processing algorithm: region-based color image segmentation for detecting
objects, based on an RGB-to-HSL transformation. The proposed work is
implemented and compiled with the embedded development kit EDK 6.3i and
the synthesis software ISE 6.3i available with the Xilinx Virtex-II FPGA,
using the C++ language. The basic motivation of our application to
radioisotopic images and neutron tomography is to assist physicians in
diagnostics by extracting regions of interest. The system is designed to
be integrated as an extension to the nuclear imaging system implemented
around our nuclear research reactor. The proposed design can significantly
accelerate the algorithm, the possible reconfiguration can be exploited to
reach higher performance in the future, and the design can be used for
many image processing applications.
Keywords: Color
images, Segmentation, Reconfigurable, Embedded, FPGA
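A software reference for the RGB-to-HSL stage (standard-library
conversion, standing in for the hardware pipeline; note that colorsys
returns H, L, S order):

    import colorsys
    import numpy as np

    def rgb_image_to_hsl(img):
        # Per-pixel conversion; img is float RGB in [0, 1].  colorsys
        # returns (h, l, s), so components are reordered to H, S, L.
        flat = img.reshape(-1, 3)
        hls = np.array([colorsys.rgb_to_hls(r, g, b) for r, g, b in flat])
        return hls[:, [0, 2, 1]].reshape(img.shape)

    hsl = rgb_image_to_hsl(np.random.rand(4, 4, 3))
    # segmentation would then threshold hue/saturation to pick regions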
Issue
10, Volume 4, October 2008
Title of the
Paper:
An
Automatic System for Urban Road Extraction from Satellite and Aerial Images
DOWNLOAD
FULL PDF
Authors:
S. Idbraim, D. Mammass, D. Aboutajdine, D. Ducrot
Abstract: We present in this paper an automatic system for urban road
extraction from satellite and aerial imagery. Our approach is based on
adaptive directional filtering and a watershed segmentation. The first
stage consists of an automatic procedure which adapts the filtering of
each block of the band to the dominant direction(s) of the roads. The
choice of the dominant direction(s) is made using a criterion based on the
calculation of a direction-of-detection factor. The second stage is based
on the watershed algorithm applied to a Shen-Castan gradient image. This
process provides a decision map that allows the errors of the first stage
to be corrected. A surface-to-perimeter ratio is used to distinguish,
among all segments of the image, those that probably represent roads.
Finally, in order to avoid gaps between pieces of roads, the resulting
image undergoes a treatment, based on proximity and collinearity, for
linking segments.
The proposed approach is tested on common scenes of Landsat ETM+ and
aerial imagery of the city of Agadir in Morocco. The experimental results
show satisfactory values of completeness and correctness and are very
promising.
Keywords: Road
extraction; Satellite and aerial imagery; Urban areas; Adaptive directional
filtering; Segmentation; Grouping; Evaluation
Title of the
Paper:
A
New Fast Forecasting Technique using High Speed Neural Networks
DOWNLOAD
FULL PDF
Authors:
Hazem M. El-Bakry, Nikos Mastorakis
Abstract: Forecasting is an important issue for many different
applications. In this paper, a new efficient forecasting technique is
presented. This technique is designed using fast neural networks (FNNs).
The new idea relies on performing cross-correlation in the frequency
domain between the input data and the input weights of the neural
networks. It is proved mathematically and practically that the number of
computation steps required by the proposed fast forecasting technique is
less than that needed by conventional neural-based forecasting. Simulation
results using MATLAB confirm the theoretical computations. The proposed
technique increases the prediction speed without affecting the prediction
accuracy. It is applied to erythemal ultraviolet irradiance prediction.
Keywords: Fast
Neural Network, Cross Correlation, Frequency Domain, Combined Neural
Classifiers, Information Fusion, erythemal UV irradiance, total ozone
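The core speed-up idea, cross-correlation evaluated in the frequency
domain, can be sketched as follows (sizes are arbitrary; this is the
textbook FFT identity, not the authors' FNN code):

    import numpy as np

    def fft_cross_correlation(x, w):
        # corr = IFFT(FFT(x) * conj(FFT(w))), zero-padded to full linear
        # length; one FFT per signal replaces all the sliding dot
        # products, which is where the speed-up over time-domain
        # evaluation comes from.  Negative lags appear wrapped at the end.
        n = len(x) + len(w) - 1
        return np.fft.irfft(np.fft.rfft(x, n) * np.conj(np.fft.rfft(w, n)), n)

    x = np.random.randn(1024)   # input window fed to the network (assumed)
    w = np.random.randn(32)     # input weights of one hidden neuron
    scores = fft_cross_correlation(x, w)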
Title of the
Paper:
Asymmetric
Ratio and FCM based Salient Channel Selection for Human Emotion Detection
using EEG
DOWNLOAD
FULL PDF
Authors:
M. Rizon, M. Murugappan, R. Nagarajan, S. Yaacob
Abstract:
The electroencephalogram (EEG) is one of the most reliable physiological
signals used for detecting the emotional states of the human brain. We
propose Asymmetric Ratio (AR) based channel selection for human emotion
recognition using EEG. Selecting a subset of channels reduces the feature
size and computational load requirements of robust emotion classification.
We address this problem using the Asymmetric Variance Ratio (AVR) and the
Amplitude Asymmetric Ratio (AAR) as new channel selection methods. Using
these methods, the 28 homogeneous pairs of EEG channels are reduced to 4
and 2 channel pairs, respectively, significantly reducing the number of
homogeneous channel pairs to be used for emotion detection. This approach
is illustrated with 5 distinct emotions (disgust, happiness, surprise,
sadness, and fear) on 63-channel EEG data recorded from 5 healthy
subjects. In this study, we used Multi-Resolution Analysis (MRA) based
feature extraction on the original and reduced sets of channels for
emotion classification. These approaches were empirically evaluated using
a simple unsupervised classifier, Fuzzy C-Means clustering with a variable
number of clusters. The paper concludes by discussing the impact of the
reduced channel sets on emotion recognition compared with the larger
number of channels and by outlining the potential of the new channel
selection methods.
Keywords: EEG, Human
Emotions, Asymmetric Ratios, Channel selection, Wavelet Transform, Fuzzy
C-Means (FCM) clustering
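The paper defines AVR and AAR precisely; as an illustration only, one
plausible normalized variance-difference form for a homologous electrode
pair (the channel indices below are hypothetical):

    import numpy as np

    def asymmetric_variance_ratio(left, right):
        # Normalized left/right variance difference for a homologous
        # pair; ranges from -1 to +1, with 0 meaning symmetric activity.
        vl, vr = np.var(left), np.var(right)
        return (vl - vr) / (vl + vr)

    eeg = np.random.randn(63, 512)                    # eeg[channel, sample]
    print(asymmetric_variance_ratio(eeg[3], eeg[4]))  # hypothetical pair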
Title of the
Paper:
Changes
in Fluctuation Waves in Coherent Airflow Structures with Input Perturbation
DOWNLOAD
FULL PDF
Authors:
Osama A. Marzouk
Abstract: We predict the
development and propagation of the fluctuations in a perturbed
ideally-expanded air jet. A non-propagating harmonic perturbation in the
density, axial velocity, and pressure is introduced at the inflow with
different frequencies to produce coherent structures in the airflow, which are
synchronized with the applied frequency. Then, the fluctuations and acoustic
fields are predicted for each frequency by integrating the axisymmetric
linearized Euler equations. We investigate the effect of the perturbation
frequency on different characteristics of the pressure and velocity waves. The
perturbation frequency can be used to alter the propagation pattern and
intensity of the waves and mitigate the noise levels at certain zones.
Keywords:
Perturbation, Fluctuation, Waves, Acoustic, Coherent, Air, Jet, Synchronized
Issue
11, Volume 4, November 2008
Title of the
Paper:
Perceptible
Content Retrieval in DCT Domain and Semi-Fragile Watermarking Technique for
Perceptible Content Authentication
DOWNLOAD
FULL PDF
Authors:
Chamidu Atupelage, Koichi Harada
Abstract: Digital watermarking was originally introduced for copyright
protection and ownership verification of multimedia data. However,
watermarking has evolved to address further security aspects of multimedia
data, such as integrity and authenticity, and fragile and semi-fragile
watermarking schemes were introduced to accomplish these requirements. In
this paper, we propose a semi-fragile watermarking scheme to authenticate
the visual content of an image. The proposed method operates in the DCT
domain and, by authenticating a particular number of low-frequency
coefficients, achieves the integrity of the image. Since the low-frequency
coefficients carry the essence of the visual data, authenticating only the
low-frequency data is adequate; the proposed technique is therefore more
efficient than others, which process all DCT coefficients. The digital
signature is generated following the elliptic curve digital signature
algorithm (ECDSA); thus, the scheme is faster than RSA-based
authentication and provides a high level of security and availability.
Additionally, our scheme localizes the compromised area of the image. The
embedded and visual data are protected from JPEG quantization by altering
the quantization table, and the degradation of the compression ratio due
to this alteration has been evaluated. Experimental results show that the
watermark does not introduce any visual artifact into the original data
and give evidence that the compression ratio degradation is negligible for
average JPEG quality factors.
Keywords:
Semi-fragile watermarking, public key cryptography, discrete cosine
transformation, image authentication, imperceptibility, elliptic curve digital
signature algorithm, JPEG
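A sketch of an authentication digest over the low-frequency DCT
coefficients (the block size, coefficient count and quantization step are
assumptions, and SHA-256 stands in for the ECDSA signing step):

    import hashlib
    import numpy as np
    from scipy.fft import dctn

    def low_freq_digest(block, k=4, step=8.0):
        # Keep only the k x k lowest-frequency DCT coefficients (they
        # carry the essence of the visual data), quantize coarsely for
        # robustness, and hash the result.
        c = dctn(block, norm='ortho')[:k, :k]
        q = np.round(c / step).astype(np.int32)
        return hashlib.sha256(q.tobytes()).hexdigest()

    block = np.random.rand(8, 8) * 255
    print(low_freq_digest(block))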
Title of the
Paper:
Bispectral
Resolution and Leakage Effect of the Indirect Bispectrum Estimate for
Different Types of 2D Window Functions
DOWNLOAD
FULL PDF
Authors:
Teofil-Cristian Oroian, Constantin-Iulian Vizitiu, Florin Serban
Abstract: An important practical problem in the area of higher-order
statistical signal processing is to estimate the cumulants and polyspectra
of the analyzed signal when only a finite sequence of time samples is
available. We cannot use the theoretical formulas, because they are based
on the assumption that an infinite sequence of time samples is available,
which is not true in practice. In order to obtain a better estimate of the
bispectrum of the signal, different types of 2D window functions are used.
These windows are investigated in terms of the resolution and leakage
effect of the indirect bispectrum estimate.
Keywords:
Higher-order statistics, bispectrum estimation, 2D window functions,
bispectral resolution
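A bare-bones indirect bispectrum estimate, as described above (the lag
range and the optional lag window are left as parameters; this is the
generic method, not the paper's specific windows):

    import numpy as np

    def indirect_bispectrum(x, L=16, lag_window=None):
        # Third-order cumulant c3(m, n) = E[x(k) x(k+m) x(k+n)] estimated
        # for lags |m|, |n| <= L, optionally multiplied by a 2D lag
        # window, then Fourier-transformed in two dimensions.
        x = x - x.mean()
        N = len(x)
        c3 = np.zeros((2 * L + 1, 2 * L + 1))
        for i, m in enumerate(range(-L, L + 1)):
            for j, n in enumerate(range(-L, L + 1)):
                k = np.arange(max(0, -m, -n), min(N, N - m, N - n))
                c3[i, j] = np.mean(x[k] * x[k + m] * x[k + n])
        if lag_window is not None:
            c3 *= lag_window        # e.g. an outer product of 1D windows
        return np.fft.fftshift(np.fft.fft2(c3))

    B = indirect_bispectrum(np.random.randn(1024))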
Title of the
Paper:
Automatic
Sea Floor Characterization based on Underwater Acoustic Image Processing
DOWNLOAD
FULL PDF
Authors:
Cristian Molder, Mircea Boscoianu, Mihai I. Stanciu, Iulian C. Vizitiu
Abstract: Automatic sea floor characterization is mainly based on signal
or image processing of data acquired using an active acoustic system
called a sediment sonar. Each processing method suits a specific type of
sonar, such as the monobeam, the multibeam, or the side-scan sonar. Most
types of sonar offer a two-dimensional view of the sea floor surface,
resulting in a high-resolution image which can be further analyzed. The
inconvenience is that such sonars cannot see inside the sea floor for a
deeper analysis. Therefore, lower-frequency acoustic systems are used for
in-depth sea floor penetration (boomers, sparkers, airguns or sub-bottom
profilers), in which case a one-dimensional signal results. Previous
studies of the low-frequency systems are mainly based on visual inspection
by a human geological expert. To automate this process, we propose the use
of feature sets based on transposing the expert's fuzzy reasoning. Two
features are extracted, the first based on the sea floor contour and the
second based on the sub-bottom sediment texture.
Keywords:
Sedimentology, Underwater Acoustics, Pattern Recognition, Image Processing,
Textures, Wavelets
Issue
12, Volume 4, December 2008
Title of the
Paper:
On
the Use of Kalman Filter for Enhancing Speech Corrupted by Colored Noise
DOWNLOAD
FULL PDF
Authors:
Boubakir Chabane, Berkani Daoued
Abstract: Kalman filtering is a powerful technique for the estimation of a
speech signal observed in additive background noise. This paper presents a
contribution to the enhancement of noisy speech under white and colored
noise assumptions. Some tests were performed with ideal filter parameters,
others using the Expectation Maximization (EM) algorithm to iteratively
estimate the spectral parameters of the speech and noise. Simulation
results on the NOIZEUS database show that the approach performs well, as
evaluated with objective quality scores, observation of the waveforms, and
informal listening tests.
Keywords: Speech
enhancement, Kalman filtering, colored noise, EM algorithm
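The white-noise case with ideal (known) parameters reduces to the scalar
Kalman recursion below; the AR(1) speech model and all variances are
illustrative values, not those estimated in the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    a, q, r = 0.95, 0.1, 1.0      # AR coefficient, process/noise variances
    s = np.zeros(500)             # clean "speech" sample, AR(1) model
    for k in range(1, 500):
        s[k] = a * s[k - 1] + np.sqrt(q) * rng.standard_normal()
    y = s + np.sqrt(r) * rng.standard_normal(500)   # noisy observation

    est, P = 0.0, 1.0
    out = np.empty_like(y)
    for k in range(500):
        est, P = a * est, a * a * P + q               # predict
        K = P / (P + r)                               # Kalman gain
        est, P = est + K * (y[k] - est), (1 - K) * P  # update with y[k]
        out[k] = est
    print(np.var(y - s), np.var(out - s))   # enhanced error is smaller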
Title of the
Paper:
Modular
Design and Implementation of FPGA-based Tap-selective Maximum-likelihood
Channel Estimator
DOWNLOAD
FULL PDF
Authors:
Jeng-Kuang Hwang, Yuan-Ping Li
Abstract: The modular design
of the optimal tap-selective maximum-likelihood (TSML) channel estimator based
on field-programmable gate array (FPGA) technology is studied. A novel range
reduction algorithm is included in the natural logarithmic function (NLF)
emulator based on the coordinate rotation digital computer (CORDIC)
methodology and is integrated into the TSML channel estimator system. The
low-complexity TSML algorithm, which is employed for sparse multipath channel
estimation, is proposed for long-range broadband block transmission systems.
Furthermore, the proposed range reduction algorithm aims to solve the limited
interval problem of the CORDIC algorithm based on Xilinx's SG platforms. The
modular approach facilitates the reuse of modules.
Keywords: Coordinate
rotation digital computer (CORDIC), FPGA design, Maximum-likelihood channel
estimation, Range reduction, Logarithm function, Parallel sorting
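The range-reduction idea can be shown in a few lines of Python: write
x = m * 2^e so the core logarithm routine only ever sees a fixed small
interval (math.log stands in here for the CORDIC-based NLF emulator):

    import math

    def ln_with_range_reduction(x, core_ln=math.log):
        # x == m * 2**e with m in [0.5, 1), so the core routine sees only
        # a small fixed interval; then ln(x) = ln(m) + e * ln(2).
        m, e = math.frexp(x)
        return core_ln(m) + e * math.log(2.0)

    print(ln_with_range_reduction(12345.678), math.log(12345.678))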
Title of the
Paper:
A
Hybrid Noise Cancelling Algorithm with Secondary Path Estimation
DOWNLOAD
FULL PDF
Authors:
Edgar Lopez-Caudana, Pablo Betancourt, Enrique Cruz, Mariko Nakano-Miyatake,
Hector Perez-Meana
Abstract: This paper presents a hybrid active noise cancelling (HANC)
algorithm to overcome the acoustic feedback present in most ANC systems,
together with an efficient secondary path estimation scheme. The HANC
system provides a solution to two fundamental problems present in this
kind of ANC system. The first is the reduction of the acoustic feedback
from the cancellation loudspeaker to the input microphone, using two FIR
adaptive filters, one with a feedforward configuration and the other with
a feedback adaptive filter configuration. To overcome the second, the
secondary path modeling problem, a modification of the method proposed by
Akhtar is used. Computer simulation results are provided to show the noise
cancellation and secondary path estimation performance of the presented
scheme.
Keywords: Active
noise canceling, secondary path estimation, feed-forward ANC, feedback ANC,
FxLMS, hybrid structure, Akhtar method
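One branch of such a structure, a bare FxLMS loop with the reference
filtered through the secondary-path estimate, can be sketched as follows
(idealized: the error is formed directly and the secondary path is the
identity; all sizes and step sizes are assumptions):

    import numpy as np

    def fxlms(x, d, s_hat, taps=32, mu=1e-3):
        # Reference filtered through the secondary-path estimate: this
        # "filtered-x" signal keeps the LMS update stable despite the path.
        xf = np.convolve(x, s_hat)[:len(x)]
        w = np.zeros(taps)
        e = np.zeros(len(x))
        for k in range(taps, len(x)):
            xb = x[k - taps:k][::-1]    # recent reference samples
            xfb = xf[k - taps:k][::-1]  # recent filtered-reference samples
            y = w @ xb                  # anti-noise output
            e[k] = d[k] - y             # residual at the error microphone
            w += mu * e[k] * xfb        # FxLMS weight update
        return e

    noise = np.random.randn(5000)
    d = np.convolve(noise, [0.0, 0.8, 0.3])[:5000]  # noise at the error mic
    e = fxlms(noise, d, s_hat=np.array([1.0]))      # identity secondary path
    print(np.mean(e[:500] ** 2), np.mean(e[-500:] ** 2))  # power drops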