WSEAS Transactions on Computers


Print ISSN: 1109-2750
E-ISSN: 2224-2872

Volume 11, 2012

Issue 1, Volume 11, January 2012

Title of the Paper: Data Mining QFD for The Dynamic Forecasting of Life Cycle under Green Supply Chain

Authors: Chih-Hung Hsu, An-Yuan Chang, Hui-Ming Kuo

Abstract: The satisfaction of customer requirements is a critical issue for computer designers and manufacturers, because computer design is a high-risk, high-value-added technology. When considering green design, designers should incorporate the voices of customers, because customers are the driving force. Meanwhile, data mining of large marketing databases has been successfully applied in a number of advanced fields. However, little work has used data mining within quality function deployment (QFD) to identify future customer requirements for computer design and manufacture. This study uses a data mining cycle in QFD to forecast future customer requirements for green design over the life cycle. Using a time series-based data mining cycle to predict the weights is advantageous because it can (1) find the future trend of customer requirements and (2) provide computer designers and manufacturers with reference points for satisfying customer requirements in advance. The results of this study provide an effective procedure for identifying trends in customer requirements and enhance the dynamic forecasting of the life cycle under the green supply chain in the computer marketplace.

Keywords: Data mining, Quality function deployment, Customer requirements, Dynamic forecasting, Life cycle, Green supply chain
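
The time-series forecasting of requirement weights that the abstract describes is not specified in detail here; as a minimal illustration of the idea, the sketch below extrapolates a customer-requirement weight with an ordinary least-squares trend line (the function name and data are hypothetical):

```python
def forecast_weight(history, periods_ahead=1):
    """Extrapolate a customer-requirement weight with a least-squares trend line."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

# Weight of a "low power consumption" requirement over five survey periods (made-up data)
print(round(forecast_weight([0.10, 0.12, 0.15, 0.17, 0.20]), 3))  # → 0.223
```

A rising forecast like this would tell designers which requirement to satisfy in advance; the paper's actual data-mining cycle is richer than a single trend line.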

Title of the Paper: Efficient Binary Tree Multiclass SVM Using Genetic Algorithms for Vowels Recognition

Authors: Boutkhil Sidaoui, Kaddour Sadouni

Abstract: In this paper we introduce and evaluate a simple framework for multiclass classification with support vector machines (SVM). We present a new architecture, EBTSVM (Efficient Binary Tree Multiclass SVM), designed to achieve high classification efficiency on multiclass problems. The proposed paradigm builds a binary tree for multiclass SVM using genetic algorithms, with the aim of obtaining optimal class partitions for the optimal tree; this makes the construction of the tree more accurate. Further, in the test phase, EBTSVM is much faster than other methods on problems with many classes because of its logarithmic complexity. In the context of phonetic classification, EBTSVM achieved a recognition rate of 57.54% on the 20 vowels of the TIMIT corpus. These results are comparable with the state of the art, in particular the results obtained by SVM with the one-versus-one strategy. In addition, training time and the number of support vectors, which determine the duration of testing, are reduced compared with other methods. However, the error rates remain too high for the speech recognition task, which calls for the development of multiclass kernel methods that are more efficient in terms of accuracy and sparsity.

Keywords: Machine Learning, SVM, Binary Tree, Genetic algorithms, Speech recognition
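
The logarithmic test-time complexity claimed above is easy to see in a sketch. The toy below replaces the per-node SVM (and the GA-optimized partition) with a nearest-group-centroid decision and a fixed halving of the class list, purely to illustrate the tree descent; all names and data are hypothetical:

```python
def group_centroid(train, classes):
    """Centroid of all training points belonging to a group of classes."""
    pts = [p for c in classes for p in train[c]]
    return [sum(v) / len(pts) for v in zip(*pts)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, classes, x):
    """Descend a binary class tree: each level halves the candidate classes,
    so only O(log K) binary decisions are needed for K classes."""
    if len(classes) == 1:
        return classes[0]
    mid = len(classes) // 2
    left, right = classes[:mid], classes[mid:]
    near_left = dist2(x, group_centroid(train, left)) <= dist2(x, group_centroid(train, right))
    return predict(train, left if near_left else right, x)

# Hypothetical 2-D training data, one point per class
train = {"a": [(0, 0)], "b": [(0, 4)], "c": [(8, 0)], "d": [(8, 4)]}
print(predict(train, sorted(train), (7.5, 3.8)))  # → d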

Title of the Paper: Locating Services in Legacy Software: Information Retrieval Techniques, Ontology and FCA Based Approach

Authors: Mostefai Abdelkader, Malki Mimoun, Boudchiha Djeloul

Abstract: Locating and identifying services in legacy software is the most challenging task in the process of migrating (i.e., reengineering) legacy software towards service-oriented architectures (SOA) and web services technologies. This paper proposes an approach to locating services in legacy software by means of information retrieval (IR) techniques, the WORDNET ontology, Formal Concept Analysis (FCA), and analysis of the software's legacy interfaces. In this approach, the interfaces are analysed to generate a query for each service to be located. The WORDNET ontology is used to expand the query terms so as to best cover the modules (i.e., the parts of the source code, such as procedures and functions) involved in the computation of the service. IR techniques (e.g., the vector space model and latent semantic analysis) are then used to map each query to the relevant modules of the legacy software, presented as a ranked list (the search space); this list represents the parts of the source code that participate in the computation of the service. The process is repeated for each service, and the results are exploited by FCA techniques to reduce the time a developer spends examining the search space to decide which parts actually contribute to the computation of each service.

Keywords: Migrating, Legacy software, FCA, Ontology, Information Retrieval Techniques, Interfaces, Web Services

Issue 2, Volume 11, February 2012

Title of the Paper: Building a Book Recommender System Using Time Based Content Filtering

Authors: Chhavi Rana, Sanjay Kumar Jain

Abstract: Recommender systems are new-generation Internet tools that help users navigate information on the Internet and receive information related to their preferences. Although recommender systems are most often applied to online shopping and entertainment domains such as movies and music, their applicability is being researched in other areas as well. This paper presents an overview of the recommender systems currently working in the domain of online book shopping. It also proposes a new book recommender system that combines a user's choices not only with those of similar users but with those of other users as well, to give diverse recommendations that change over time. The overall architecture of the proposed system is presented, and its implementation with a prototype design is described. Lastly, the paper presents an empirical evaluation of the system based on a survey reflecting the impact of such diverse recommendations on user choices.

Keywords: Recommender system, Collaborative filtering, Content filtering, Data mining, Time, Book
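
As a rough illustration of time-aware collaborative filtering (not the paper's algorithm), the sketch below averages other users' ratings of an item with an exponential time decay; the rating schema and half-life are assumptions:

```python
import math

def predict_rating(ratings, item, now, half_life=180.0):
    """Weighted mean of all users' ratings of `item`, down-weighting old ratings.
    `ratings` maps user -> list of (item, rating, day) tuples (hypothetical schema)."""
    num = den = 0.0
    for user_ratings in ratings.values():
        for it, r, day in user_ratings:
            if it == item:
                w = math.exp(-math.log(2) * (now - day) / half_life)  # exponential decay
                num += w * r
                den += w
    return num / den if den else None

ratings = {"u1": [("book_a", 5, 0)],     # rated long ago
           "u2": [("book_a", 2, 330)]}   # rated recently
print(round(predict_rating(ratings, "book_a", now=360), 2))
```

With the recent low rating weighted about 3.6 times more than the year-old high one, the prediction lands nearer 2 than the plain mean of 3.5; this is the sense in which recommendations change over time.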

Title of the Paper: Neural Network Modeling for an Intelligent Recommendation System Supporting SRM for Universities in Thailand

Authors: Kanokwan Kongsakun, Chun Che Fung

Abstract: In order to support academic management processes, many universities in Thailand have developed innovative information systems and services with the aim of enhancing efficiency and student relationships. Some of these initiatives take the form of a Student Recommendation System (SRM). However, the success or appropriateness of such a system depends on the expertise and knowledge of the counselor. This paper describes the development of a proposed Intelligent Recommendation System (IRS) framework and presents experimental results. The proposed system is based on an investigation of the possible correlations between students' historical records and final results. Neural network techniques have been used to find the structures and relationships within the data, with the final Grade Point Averages of freshmen in a number of courses as the subjects of interest. This information will help counselors recommend appropriate courses for students, thereby increasing their chances of success.

Keywords: Data Mining, Neural Network, Student Relationship Management, Intelligent Recommendation System

Title of the Paper: Reliability of Component Based Systems – A Critical Survey

Authors: Kirti Tyagi, Arun Sharma

Abstract: Software reliability is defined as the probability of failure-free operation of a software system for a specified period of time in a specified environment. As software applications grow more complex, with increasing emphasis on reuse, Component Based Software (CBS) applications have emerged. The focus of this paper is to provide an overview of the state of the art in reliability estimation for Component Based Systems. We discuss various approaches in terms of their scope, models, methods, techniques and validation schemes. This comparison provides insight for determining the direction of future CBS reliability research.

Keywords: Reliability, Failure, Markov Model, Component Based Systems, Software Architecture, Reliability Model, State Based Model, Path Based Model, Additive Model, Scenario, Operational Profile, Component Dependency Graph, Component, Failure Behavior, Non Homogeneous Poisson Process, Discrete Time Markov Chain
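
Among the surveyed families, path-based models are the simplest to sketch: the reliability of one execution path is the product of its components' reliabilities, and the system estimate weights paths by the operational profile. The numbers below are hypothetical:

```python
def path_reliability(path, rel):
    """Reliability of one execution path = product of its components' reliabilities."""
    r = 1.0
    for comp in path:
        r *= rel[comp]
    return r

def system_reliability(profile, rel):
    """Path-based estimate: execution paths weighted by the operational profile."""
    return sum(p * path_reliability(path, rel) for path, p in profile)

rel = {"ui": 0.999, "auth": 0.995, "db": 0.990}   # hypothetical component reliabilities
profile = [(("ui", "auth", "db"), 0.7),            # 70% of runs touch all three components
           (("ui", "db"), 0.3)]
print(round(system_reliability(profile, rel), 4))  # → 0.9855
```

State-based models replace the fixed paths with transition probabilities of a Markov chain, but the weighting idea is the same.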

Issue 3, Volume 11, March 2012

Title of the Paper: Source Anonymization Using Modified New Variant ElGamal Signature Scheme

Authors: Jhansi Vazram B., Valli Kumari V., Murthy J. V. R.

Abstract: Mobile ad hoc networks (MANETs) have distinct features: dynamic nodes, changing topologies, node cooperation and open communication media. Anonymity of message contents and participants is the most pressing concern in MANET communication. Most existing methods face a challenge due to heavy cryptographic computation and high communication overheads. In this paper we propose an unconditionally secure privacy-preserving message authentication scheme (PPMAS), which uses a Modified New variant ElGamal signature Scheme (MNES). The scheme enables a sender to transmit messages with authentication and anonymity, without relying on any trusted third party. It also makes the link between a node's identifier and its location untraceable. An experimental analysis of the proposed system is presented.

Keywords: Network security, Anonymity, Privacy, Mobile ad hoc networks, PPMAS, MNES
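
The modified variant (MNES) is not specified in the abstract, but the textbook ElGamal scheme it builds on can be sketched as follows; the prime is tiny and for demonstration only:

```python
import hashlib, math, random

# Textbook ElGamal signatures over a toy prime (insecure demo, and not the
# paper's modified variant, whose details are not given here).
p, g = 104729, 2                 # small prime and base, demo only
x = random.randrange(2, p - 1)   # private key
y = pow(g, x, p)                 # public key

def h(m):
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % (p - 1)

def sign(m):
    while True:
        k = random.randrange(2, p - 1)
        if math.gcd(k, p - 1) == 1:   # k must be invertible mod p-1
            break
    r = pow(g, k, p)
    s = (h(m) - x * r) * pow(k, -1, p - 1) % (p - 1)
    return r, s

def verify(m, r, s):
    return 0 < r < p and pow(g, h(m), p) == pow(y, r, p) * pow(r, s, p) % p

r, s = sign(b"hello")
print(verify(b"hello", r, s), verify(b"tampered", r, s))  # → True False
```

Anonymity-oriented variants change how the keys and the pair (r, s) are formed; the verification identity g^h(m) ≡ y^r · r^s (mod p) is the part all ElGamal-family schemes share.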

Title of the Paper: Novel Intuitionistic Fuzzy C-Means Clustering for Linearly and Nonlinearly Separable Data

Authors: Prabhjot Kaur, A. K. Soni, Anjana Gosain

Abstract: This paper presents a robust Intuitionistic Fuzzy c-means (IFCM-σ) algorithm in the data space and a robust kernel Intuitionistic Fuzzy c-means (KIFCM-σ) algorithm in the high-dimensional feature space, with a new distance metric, to improve the performance of Intuitionistic Fuzzy c-means (IFCM), which is based on intuitionistic fuzzy set theory. IFCM introduced an uncertainty parameter called the hesitation degree and incorporated a new objective function based on intuitionistic fuzzy entropy into the conventional Fuzzy c-means, showing better performance than conventional Fuzzy c-means. We further improve the performance of IFCM by incorporating a new distance measure that also considers the distance variation within a cluster to regularize the distance between a data point and the cluster centroid. Experiments were conducted using two-dimensional synthetic data-sets and standard data-sets taken from previous papers. The results show that the proposed algorithms, especially KIFCM-σ, are more effective for both linearly and nonlinearly separable data.

Keywords: Fuzzy Clustering, Intuitionistic Fuzzy C-Means, Robust Clustering, Kernel Intuitionistic Fuzzy C-Means, Distance metric, Fuzzy c-means
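
The baseline these algorithms extend is the standard Fuzzy c-means membership update; the sketch below shows that update only (the hesitation degree and the new distance metric of IFCM-σ/KIFCM-σ are not reproduced):

```python
def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def fcm_memberships(points, centroids, m=2.0):
    """One FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
    IFCM additionally perturbs u with a hesitation degree (not shown here)."""
    U = []
    for x in points:
        d = [max(dist(x, c), 1e-12) for c in centroids]   # avoid division by zero
        U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1)) for j in range(len(d)))
                  for i in range(len(d))])
    return U

U = fcm_memberships([(0.0, 0.0), (4.0, 0.0), (2.0, 0.0)], [(0.0, 0.0), (4.0, 0.0)])
print([[round(u, 2) for u in row] for row in U])  # midpoint gets 0.5 / 0.5 membership
```

Each row sums to 1, so memberships behave like soft assignments; kernel variants replace `dist` with a distance induced in feature space.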

Title of the Paper: Integrating SOA and Cloud Computing for SME Business Objective

Authors: Ashish Seth, Himanshu Agarwal, Ashim Raj Singla

Abstract: There is a need to understand the existing and future state architecture before one begins selecting platforms and technology. Service Oriented Architecture (SOA) and the cloud together can provide complete service-based solutions for SMEs. Both SOA and the cloud deal with delivering services to business with improved agility, increased speed and reduced cost, which can lead to greater innovation and effective returns on investment. In this paper we propose an architecture that uses SOA principles to create an overall strategic plan, and we focus on how the architectural context supports the use of cloud computing.

Keywords: SOA, Clouds, Agility, Adhoc model, Architecture

Issue 4, Volume 11, April 2012

Title of the Paper: Dynamic Tabu Search for Dimensionality Reduction in Rough Set

Authors: Zalinda Othman, Azuraliza Abu Bakar, Salwani Abdullah, Mohd Zakree, Ahmad Nazri, Nelly Anak Sengalang

Abstract: This paper proposes a dynamic tabu search (DTSAR) that incorporates a dynamic tabu list to solve the attribute reduction problem in rough set theory. The dynamic tabu list is used to skip the aspiration criteria and to promote faster running times. A number of experiments were conducted to evaluate the proposed technique against published metaheuristic techniques, rough sets and decision trees. DTSAR showed promising reduct-generation times, ranging from 0.20 to 22.18 minutes. In the number of reducts produced, DTSAR is on par with other metaheuristic techniques and outperforms some of them on certain datasets. The quality of the classification rules generated using DTSAR was comparable with that of the two other methods, i.e., rough sets and decision trees.

Keywords: Tabu search, attribute reduction, rough set, computational intelligence, dynamic Tabu list
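
The general tabu-search pattern behind attribute reduction (the dynamic list aside) can be sketched on a toy decision table: flip attributes in or out, reject subsets that lose discernibility, and keep recently flipped attributes tabu. Everything below is illustrative, not the paper's DTSAR:

```python
from collections import deque
import random

def consistent(table, attrs):
    """True if the attribute subset still discerns objects with different decisions."""
    seen = {}
    for row, decision in table:
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, decision) != decision:
            return False
    return True

def tabu_reduct(table, n_attrs, iters=200, tabu_len=5, seed=0):
    rng = random.Random(seed)
    current = set(range(n_attrs))          # start from the full attribute set
    best = set(current)
    tabu = deque(maxlen=tabu_len)          # recently flipped attributes are tabu
    for _ in range(iters):
        a = rng.randrange(n_attrs)
        if a in tabu:
            continue
        candidate = current ^ {a}          # flip attribute a in or out
        if candidate and consistent(table, sorted(candidate)):
            current = candidate
            tabu.append(a)
            if len(current) < len(best):
                best = set(current)
    return sorted(best)

# Toy decision table (hypothetical): 3 condition attributes, attribute 0 alone suffices
table = [((0, 1, 0), "yes"), ((1, 1, 0), "no"), ((0, 0, 1), "yes"), ((1, 0, 1), "no")]
print(tabu_reduct(table, 3))  # → [0]
```

Real reduct search uses rough-set dependency degrees rather than this simple consistency check, but the neighborhood structure is the same.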

Title of the Paper: Adaptive Visualization of 3D Meshes Using Localized Triangular Strips

Authors: M.-H. Mousa, M.-K. Hussein

Abstract: 3D meshes are the principal representation of 3D objects. They are easy to manipulate and visualize; however, they often require a huge amount of data for storage and/or transmission. In this paper, we present an effective technique for streaming triangular meshes using an enhanced mesh stripification algorithm. Stripification algorithms are used to speed up the rendering of geometric models because they reduce the number of vertices sent to the graphics pipeline by exploiting the fact that adjacent triangles share an edge; this allows our technique to adaptively visualize 3D models during their transmission. The first step of the proposed technique stores 3D objects as a set of strips, encoding the geometry and the connectivity of the input model in a robust fashion. The stripification algorithm achieves compression ratios above 61:1 over ASCII-encoded formats. Second, the strips are sent directly to the rendering pipeline according to the viewpoint direction, resulting in faster transmission and rendering of complex graphical objects. Binary space partitioning with kd-trees is used to enhance our stripification algorithm. Examples are given to demonstrate the effectiveness of the technique.

Keywords: triangle strips, adaptive visualization, kd-tree partitioning, mesh compression
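
The vertex saving that stripification exploits is easy to demonstrate: a strip of n vertices encodes n - 2 triangles, with alternating winding order. A minimal decoder (illustrative only):

```python
def strip_to_triangles(strip):
    """Expand one triangle strip into individual triangles.
    Winding order alternates, so odd-indexed triangles swap two vertices."""
    tris = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris

# A strip of 6 vertex indices encodes 4 triangles using 6 indices instead of 12
print(strip_to_triangles([0, 1, 2, 3, 4, 5]))
```

Sending strips instead of independent triangles is what lets the renderer start drawing a partially transmitted model.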

Title of the Paper: Adjusted Artificial Bee Colony (ABC) Algorithm for Engineering Problems

Authors: Milan Tuba, Nebojsa Bacanin, Nadezda Stanarevic

Abstract: In this paper we present a modified algorithm that integrates the artificial bee colony (ABC) algorithm with adaptive guidance, adjusted for engineering optimization problems. The novel algorithm speeds up convergence and improves the algorithm's exploitation/exploration balance. Although the scout bee phase is used for exploration, we introduced an adaptive parameter that narrows the search space at different stages of the algorithm, facilitating faster convergence. We tested our algorithm on four standard engineering benchmark problems. The experimental results show that our modified algorithm outperforms the pure ABC algorithm in most cases.

Keywords: Artificial bee colony, Constrained optimization, Swarm intelligence, Metaheuristic optimization
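
For reference, the plain ABC loop that the adaptive guidance modifies looks roughly like this; a minimal sketch on the sphere function, without the paper's adaptive search-space narrowing:

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=300, seed=1):
    """Minimal artificial bee colony: employed, onlooker and scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food
    best = min(fit)
    for _ in range(iters):
        # employed bees visit every source; onlookers favour fitter sources
        onlookers = rng.choices(range(n_food),
                                weights=[1.0 / (1.0 + v) for v in fit], k=n_food)
        for i in list(range(n_food)) + onlookers:
            j, d = rng.randrange(n_food), rng.randrange(dim)
            cand = foods[i][:]
            cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[j][d])
            cand[d] = min(max(cand[d], lo), hi)
            fc = f(cand)
            if fc < fit[i]:                       # greedy selection
                foods[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        best = min(best, min(fit))
        for i in range(n_food):                   # scouts abandon exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i], trials[i] = f(foods[i]), 0
    return best

sphere = lambda x: sum(v * v for v in x)
print(abc_minimize(sphere, dim=3, bounds=(-5, 5)))
```

The paper's modification would shrink the scouts' sampling range as the run progresses; here the scouts always resample the full box.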

Issue 5, Volume 11, May 2012

Title of the Paper: A Distributed Shared Memory Cluster Architecture With Dynamic Load Balancing

Authors: Minakshi Tripathy, C. R. Tripathy

Abstract: This paper proposes a distributed shared memory cluster architecture with load balancing. The architecture is based on a dynamic task scheduling approach for distribution and assignment, and it enhances the performance of cross-cluster communication for data access. The proposed dynamic load balancing model uses the concept of work stealing, which intelligently balances the load among different nodes. Work stealing consistently provides higher system utilization when many jobs with varying characteristics are running, resulting in efficient use of the system. The performance analysis shows that the proposed architecture outperforms previously proposed distributed shared memory clusters in terms of scalability and efficiency.

Keywords: Block Data Layout, Data Locality, Task Distribution, Master-Slave Paradigm, Work Stealing
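
The work-stealing idea can be sketched with per-node deques: owners take tasks from their own tail, and idle nodes steal from the head of the fullest queue. This is a toy single-threaded scheduler, not the paper's architecture:

```python
from collections import deque

def run(queues):
    """Drain all task deques; idle workers steal from the busiest queue's head."""
    done = []
    while True:
        idle = True
        for i, q in enumerate(queues):
            if q:
                done.append((i, q.pop()))               # owner works from its own tail
                idle = False
            else:
                victim = max(range(len(queues)), key=lambda j: len(queues[j]))
                if queues[victim]:
                    q.append(queues[victim].popleft())  # steal from the victim's head
                    idle = False
        if idle:
            return done

queues = [deque(["t1", "t2", "t3", "t4"]), deque(), deque()]
done = run(queues)
print(sorted(t for _, t in done), len({w for w, _ in done}))
```

Stealing from the opposite end of the deque is the classic trick: it minimizes contention between the owner and the thief in a real concurrent implementation.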

Title of the Paper: On Performance Analysis of Hybrid Intelligent Algorithms (Improved PSO with SA and Improved PSO with AIS) with GA, PSO for Multiprocessor Job Scheduling

Authors: K. Thanushkodi, K. Deeba

Abstract: Many heuristic-based approaches have been applied to finding schedules that minimize the execution time of computing tasks on parallel processors. Particle Swarm Optimization (PSO) is currently employed in several optimization and search problems due to its ease of use and ability to find solutions successfully. A variant of PSO, called Improved PSO (ImPSO), has been developed in this paper and is hybridized with an Artificial Immune System (AIS) to achieve better solutions. This approach distinguishes itself from many existing approaches in two aspects. In the particle swarm system, a novel concept for the distance and velocity of a particle is presented to pave the way for the job-scheduling problem. In the AIS, models of vaccination and receptor editing are designed to improve the immune performance. The proposed hybrid algorithm effectively exploits the distributed and parallel computing capabilities of swarm intelligence approaches, and the hybridization is employed in order to improve the performance of improved PSO. This paper shows the application of hybrid improved PSO to scheduling multiprocessor tasks. A comparative performance study of the intelligent hybrid algorithms (ImPSO with SA and ImPSO with AIS) shows that the proposed hybrid approach using ImPSO with AIS gives better results than the intelligent hybrid algorithm using ImPSO with SA in solving multiprocessor job scheduling.

Keywords: PSO, Improved PSO, Simulated Annealing, Hybrid Improved PSO, Artificial Immune System (AIS), Job Scheduling, Finishing time, waiting time

Title of the Paper: Hand Written English Character Recognition Using Column-wise Segmentation of Image Matrix (CSIM)

Authors: Rakesh Kumar Mandal, N. R. Manna

Abstract: Research is ongoing to develop both hardware and software that recognize handwritten characters easily and accurately. The Artificial Neural Network (ANN) is a very efficient method for recognizing handwritten characters, and attempts have already been made to recognize English alphabets using methods of this type. In this paper, a new method is tried to improve the performance of the previously applied methods. The input image matrix is compressed into a lower-dimension matrix in order to discard non-significant elements, and the compressed matrix is then segmented column-wise. Each column of a particular image matrix is mapped to identical patterns for recognizing a particular character, and a majority of matches with a known pattern decides the presence of that character.

Keywords: ANN, CSIM, Compression, Perceptron, Segmentation, Learning Rule

Issue 6, Volume 11, June 2012

Title of the Paper: Development of CAD Algorithms for Bezier Curves/Surfaces Independent of Operating System

Authors: Yogesh Kumar, S. K. Srivastava, A. K. Bajpai, Neeraj Kumar

Abstract: Most currently available CAD software works only on the operating system (generally Windows) for which it was designed; in other words, commercial CAD software is dependent upon the operating system. CAD software designed to be independent of the operating system would be much more beneficial in the present scenario. Nowadays most commercial software uses application programming interfaces (APIs), which provide libraries of common graphics operations that allow developers to incorporate many realistic effects into their applications, but the resulting CAD software remains dependent on the operating system, which is its major drawback. There is therefore a need to develop CAD algorithms independent of the operating system, so that they can be used in the development of any CAD software. Keeping this in view, the present work is devoted to the development of CAD algorithms for Bezier curves and Bezier surfaces that are independent of the operating system. The operating-system-independent graphics library OpenGL has been used for the development of these algorithms.

Keywords: Operating System Independent, CAD Algorithms, Bezier Curves/Surfaces
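
The core Bezier evaluation such algorithms rest on, de Casteljau's algorithm, is indeed pure arithmetic and hence independent of any operating system or graphics API:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = [list(p) for p in control_points]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return tuple(pts[0])

# Cubic Bezier: endpoints are interpolated, interior control points shape the curve
ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
print(de_casteljau(ctrl, 0.0), de_casteljau(ctrl, 0.5), de_casteljau(ctrl, 1.0))
```

Only the final drawing of the evaluated points touches a graphics library such as OpenGL; the geometry itself is portable.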

Title of the Paper: Parallel Particle Swarm Optimization on Graphical Processing Unit for Pose Estimation

Authors: Vincent Roberge, Mohammed Tarbouchi

Abstract: In this paper, we present a parallel implementation of the Particle Swarm Optimization (PSO) on GPU using CUDA. By fully utilizing the processing power of graphic processors, our implementation provides a speedup of 215x compared to a sequential implementation on CPU. This speedup is significantly superior to what has been reported in recent papers and is achieved by a few simple optimizations we made to better adapt the parallel algorithm to the specific architecture of the NVIDIA GPU. Next, we apply our parallel PSO to the problem of 3D pose estimation of a bomb in free fall. We reduce the computation time of the analysis of 120 images to about 1 s, representing a speedup of 140x compared to the sequential version on CPU.

Keywords: CUDA, graphic processing units, particle swarm optimization, parallel implementation, 3D pose estimation
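
The per-particle update that a CUDA implementation maps onto GPU threads is the standard PSO step; below is a plain serial sketch with illustrative parameters, not the paper's implementation:

```python
import random

def pso_minimize(f, dim, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain serial PSO; a GPU version runs the inner per-particle loop in parallel."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    pcost = [f(x) for x in X]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):                     # this loop is what the GPU parallelizes
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            c = f(X[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = X[i][:], c
                if c < gcost:
                    gbest, gcost = X[i][:], c
    return gcost

sphere = lambda x: sum(v * v for v in x)
print(pso_minimize(sphere, dim=3, bounds=(-5, 5)))
```

Because each particle's update depends only on its own state plus the shared global best, the algorithm parallelizes naturally; the reported 215x speedup comes from tuning that mapping to the GPU memory hierarchy.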

Title of the Paper: WML Detection of Brain Images Using Fuzzy and Possibilistic Approach in Feature Space

Authors: M. Anitha, P. Tamije Selvy, V. Palanisamy

Abstract: White matter lesions (WMLs), small areas of dead cells found in the parts of the brain that act as connectors, are detected using magnetic resonance imaging (MRI), which has increasingly become an active and challenging research area in computational neuroscience. This paper presents new image segmentation models for the automated detection of white matter changes in the brain in an elderly population. The main focus is on unsupervised clustering algorithms. Clustering divides scattered data into several groups and is commonly viewed as an instance of unsupervised learning, which in machine learning refers to the problem of finding hidden structure in unlabeled data. The unsupervised clustering models, Fuzzy c-means (FCM) clustering, Geostatistical Fuzzy c-means (GFCM) clustering and Geostatistical Possibilistic clustering, partition the dataset into clusters according to a defined distance measure. The Region of Interest (ROI) is then extracted from the membership map. Much more accurate results are obtained by GFCM, which localizes large regions of WMLs better than FCM.

Keywords: Fuzzy clustering, geostatistics, image segmentation, magnetic resonance imaging, possibilistic clustering, white matter changes

Issue 7, Volume 11, July 2012

Title of the Paper: A Hybrid Approach for Detecting, Preventing, and Traceback DDoS Attacks

Authors: Ali E. El-Desoky, Marwa F. Aread, Magdy M. Fadel

Abstract: The main objective of this study is to design a hybrid technique to defend against DDoS attacks. Distributed Denial of Service (DDoS) attacks constitute one of the major threats and are among the hardest security problems in today's Internet. With little or no advance warning, a DDoS attack can easily exhaust the computing and communication resources of its victim within a short period of time. The network simulator NS2 is used to test the efficiency of the proposed technique in filtering out attack packets and tracing them back to their sources. Several criteria are used to evaluate the proposed technique: the ratio of dropped packets, the ratio of legal packets passed, and the accuracy of determining the actual source of the attack packets. Applying these techniques will increase the efficiency of preventing DDoS attacks from succeeding.

Keywords: DDoS attacks, Firewall, Bloom filter, Packet marking, Packet logs, Packet tracing
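
The Bloom filter mentioned in the keywords is a natural fit for remembering marked or previously seen packets compactly; a minimal sketch (sizes and addresses are illustrative):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter of the kind used to remember seen/marked packets.
    False positives are possible; false negatives are not."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("10.0.0.1")
print("10.0.0.1" in bf, "192.168.1.9" in bf)
```

A router can record a packet digest in a few bits per packet this way, which is what makes hop-by-hop traceback of attack flows affordable.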

Title of the Paper: Design and Implementation of an Efficient SCA Core Framework for a DSP Platform

Authors: Wael A. Murtada, Mohamed M. Zahra, Magdi Fikri, Mohamed I. Yousef, Salwa El-Ramly

Abstract: The Software Communications Architecture (SCA) was developed to improve software reuse and interoperability in Software Defined Radios (SDR). However, there have been performance concerns since its conception. Arguably, the majority of the problems and inefficiencies associated with the SCA can be attributed to the assumption of modular distributed platforms relying on General Purpose Processors (GPPs) to perform all signal processing. Significant improvements in cost and power consumption can be obtained by utilizing specialized, more efficient platforms. Digital Signal Processors (DSPs) are such a platform and have been widely used in the communications industry. Improvements in development tools and middleware technology have opened the possibility of fully integrating DSPs into the SCA. This approach takes advantage of the exceptional power, cost, and performance characteristics of DSPs while retaining the flexibility and portability of the SCA. This paper presents the design and implementation of an SCA Core Framework (CF) for a TI TMS320C6416 DSP. The framework is deployed on a C6416 device cycle-accurate simulator and a TI C6416 development board. The SCA CF is implemented by leveraging OSSIE, an open-source implementation of the SCA, to support the DSP platform. OIS's ORBExpress DSP and DSP/BIOS are used as the middleware and operating system, respectively. A sample waveform was developed to demonstrate the framework's functionality, and benchmark results for the framework and sample applications are provided. The benchmarks show that using the OIS ORBExpress DSP ORB middleware decreases the memory footprint and increases performance compared with PrismTech's e*ORB middleware.

Keywords: Software Communications Architecture (SCA), Software Defined Radio (SDR), Digital Signal Processors, Embedded Object Request Broker (ORB)

Title of the Paper: Re-Optimizing the Performance of Shortest Path Queries Using Parallelized Combining Speedup Technique based on Bidirectional Arc Flags and Multilevel Approach

Authors: R. Kalpana, P. Thambidurai

Abstract: Shortest path problems are in increasing demand globally due to voluminous datasets in applications such as roadmaps, web search engines and mobile data sets. Computing the shortest path between nodes in a given directed graph is a very common problem. Among the various shortest path algorithms, Dijkstra's algorithm [1] is said to have better run-time performance than the others, and its output can be improved with speedup techniques. In this paper three speedup techniques are combined into a new speedup technique, each parallelized individually, and the performance of the combination is measured with respect to preprocessing time, run time and number of nodes visited on random graphs, planar graphs and real-world data sets.

Keywords: Bidirectional Arcflags, Multilevel method, Multilevel Arcflags, Parallelized Multilevel Arcflags
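
The baseline all these speedup techniques start from is Dijkstra's algorithm itself; arc flags and multilevel overlays prune which edges get relaxed, but the loop they accelerate is the following (toy graph for illustration):

```python
import heapq

def dijkstra(graph, source, target):
    """Baseline Dijkstra with a binary heap; speedup techniques prune the edge
    relaxations in the inner loop without changing the overall structure."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)]}
print(dijkstra(graph, "a", "d"))  # shortest route a -> b -> c -> d
```

An arc-flag variant would consult a per-edge bit vector before relaxing each edge `(u, v)`, skipping edges that provably lead away from the target's region.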

Title of the Paper: Capturing Form of Non-verbal Conversational Behavior for Recreation on Synthetic Conversational Agent EVA

Authors: Izidor Mlakar, Matej Rojc

Abstract: Several features of human-human conversation have to be accounted for in order to recreate conversational behavior on a synthetic model as naturally as possible. Spontaneous conversations combine multiple modalities (e.g., gestures, postures, gazes, expressions) to convey information effectively between participants. This paper presents a novel process for capturing the forms of motion performed during spontaneous conversations. Furthermore, it addresses the process of transforming the captured motion descriptions into high-resolution, expressively transformable behavioral scripts. The aim of the research was to design a process for building a high-resolution motion dictionary, presented as a set of expressively transformable behavioral scripts, each capturing the expressive details of a spontaneous conversation (e.g., spatial, repetitive, structural, and temporal features).

Keywords: Multimodal annotation, conversational behavior, high-resolution annotation, motion dictionary, analysis of non-verbal behavior

Issue 8, Volume 11, August 2012

Title of the Paper: Communication Complexities: A Software Construction Perspective

Authors: Lipeng Yong

Abstract: Half a century has gone by, yet the software crisis endures. Many have thought that its Achilles heel lies in the ways of constructing software, or in the ways of managing software construction, or both, and so they attack it vigorously and without reserve, like Hercules whacking off the Hydra's heads. This paper instead casts an eye on the communication among software construction team members, first dissecting it into six levels of complexity, from individual to Internet communication. Communication is then tackled mathematically, and the formulas are simplified into rules of thumb.

Keywords: Software Crisis, Software Engineering, Communication Complexity, Reflectivity, Responsivity, Standish Report
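
The classic quantity behind such rules of thumb is the pairwise-channel count, n(n-1)/2, which grows quadratically with team size; the paper's own formulas are not reproduced here:

```python
def channels(n):
    """Pairwise communication channels in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

# Why communication complexity dominates as teams grow
for n in (2, 5, 10, 50):
    print(n, channels(n))
```

Two people need one channel; fifty need 1225, which is why larger teams must replace all-pairs communication with structured intermediaries.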

Title of the Paper: Identification of Noisy Speech Signals using Bispectrum-based 2DMFCC and Its Optimization through Genetic Algorithm as a Feature Extraction Subsystem

Authors: Benyamin Kusumoputro, Agus Buono, Lina

Abstract: Power-spectrum-based Mel-Frequency Cepstrum Coefficients (MFCC) are usually used as the feature extractor in a speaker identification system. This one-dimensional feature extraction subsystem, however, shows low recognition rates when identifying utterances under harsh noise conditions. In this paper, we develop a speaker identification system based on bispectrum data, which is more robust to added Gaussian noise. As the one-dimensional MFCC method cannot be applied directly to the two-dimensional bispectrum data, we propose a two-dimensional MFCC (2D-MFCC) method and optimize it using a Genetic Algorithm (GA). Experiments were conducted using the 2D-MFCC method as the feature extractor and a Hidden Markov Model as the pattern classifier on utterances containing various levels of Gaussian noise. The results show that the developed system achieves higher recognition rates than the 1D-MFCC method, especially when the 2D-MFCC with GA optimization is used.

Keywords: Speaker Identification System, 2D Mel-Frequency Cepstrum Coefficients, Bispectrum, Hidden Markov Model, Genetics Algorithms
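
The mel-scale warping shared by 1D- and 2D-MFCC front ends can be sketched directly; the code below shows only the filterbank center-frequency placement (triangular filter shapes and the cepstral DCT are omitted):

```python
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(n_filters, f_min, f_max):
    """Center frequencies of a mel filterbank: equally spaced on the mel scale,
    which crowds the filters toward perceptually important low frequencies."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_filters + 1)
    return [mel_to_hz(lo + step * (i + 1)) for i in range(n_filters)]

centers = mel_filter_centers(10, 0, 8000)
print([round(c) for c in centers[:3]])  # spacing widens as frequency rises
```

The 2D extension applies the same warping along both frequency axes of the bispectrum; the GA in the paper tunes the filterbank layout rather than replacing it.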

Title of the Paper: Research and Implementation of Peer-to-Peer Network Topology based on Balanced Binary Tree

Authors: Zong Hu

Abstract: With the rapid development of networks, locating or looking up resources in peer-to-peer networks has become a hot topic. Having analyzed the disadvantages of current peer-to-peer networks, we propose AVLNet, a new peer-to-peer network that organizes the peer-to-peer overlay as a balanced binary tree. Each node in AVLNet holds only the information of its parent and children, which solves the problems caused by status parameters in unstructured networks. AVLNet also weakens the coupling between nodes found in structured networks, which solves the problem of frequent hashing. The paper designs the peer check-in, check-out and search strategies of AVLNet at the algorithm level and implements them on the JXTA platform, demonstrating the correctness and feasibility of the AVLNet network both in theory and in practice. Furthermore, it simulates AVLNet, Gnutella and Chord in MatLab, compares the performance of inexact search in the three networks, and shows the advantages of AVLNet.

Keywords: Peer-to-Peer Network; JXTA; Balanced Binary Tree (AVL); Searching Algorithm
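The tree-structured overlay described in the abstract above can be pictured in a few lines: each peer stores only its parent and children, and a query floods along those local links; since a tree has no cycles, no global state is needed. This is an illustrative reconstruction, not the paper's JXTA implementation; the `Peer` class and `search` function are hypothetical names.

```python
from collections import deque

class Peer:
    """A peer in a tree-shaped overlay: knows only its parent and children."""
    def __init__(self, pid, resources=()):
        self.pid = pid
        self.resources = set(resources)
        self.parent = None
        self.children = []

    def attach(self, child):
        child.parent = self
        self.children.append(child)

    def neighbors(self):
        return ([self.parent] if self.parent else []) + self.children

def search(start, resource):
    """Flood a query over the tree using local links only; each peer is
    visited at most once because a tree contains no cycles."""
    queue, seen = deque([start]), {start.pid}
    while queue:
        peer = queue.popleft()
        if resource in peer.resources:
            return peer.pid
        for nb in peer.neighbors():
            if nb.pid not in seen:
                seen.add(nb.pid)
                queue.append(nb)
    return None
```

A query starting at any peer reaches the whole overlay through parent and child links alone, which is the property the abstract contrasts with status parameters in unstructured networks.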

Title of the Paper: The ChipCflow Project to Accelerate Algorithms using a Dataflow Graph in a Reconfigurable System

Authors: Antonio Carlos Fernandes Da Silva, Bruno De Abreu Silva, Joelmir Jose Lopes, Jorge Luiz E Silva

Abstract: In this paper, the ChipCflow project, which accelerates algorithms using a static dataflow architecture prototyped on a field programmable gate array (FPGA), is presented. The static dataflow architecture was implemented using operators interconnected by parallel buses. Accelerating algorithms with a dataflow graph in a reconfigurable system shows the potential for high computation rates. The results of benchmarks implemented on the static dataflow architecture are reported at the end of this paper.

Keywords: Accelerating algorithms, Reconfigurable Computing, Static Dataflow Graph, Modules C to VHDL

Issue 9, Volume 11, September 2012

Title of the Paper: Geometry, Duality and Robust Computation in Engineering

Authors: Vaclav Skala

Abstract: Robustness of computations in engineering is one of the key issues, as it is necessary to solve technical problems that lead to ill-conditioned solutions. Robustness and numerical stability are therefore becoming more important than computational time. In this paper we show selected computational issues in numerical precision, including well-known cases of computational failure. The Euclidean representation is used in today's computations; however, the projective-space representation (an extension of the Euclidean space) leads to more compact and robust formulations and to matrix-vector operations supported in hardware, e.g. by GPUs.

Keywords: Euclidean space, projective space, homogeneous coordinates, duality, intersections, barycentric coordinates, planes intersection, Plücker coordinates, numerical precision
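One concrete example of the compactness the abstract refers to: in homogeneous coordinates, both the line through two points and the intersection of two lines reduce to a cross product, and parallel lines need no special case (their "intersection" simply comes out with w = 0, a point at infinity). A minimal sketch:

```python
def cross(a, b):
    """3-vector cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# A 2D point (x, y) is represented as (x, y, 1); a line a*x + b*y + c = 0
# as the coefficient vector (a, b, c).
def line_through(p, q):
    """Line joining two homogeneous points."""
    return cross(p, q)

def intersection(l1, l2):
    """Intersection of two homogeneous lines; w == 0 means the lines are
    parallel and meet at a point at infinity -- no branching required."""
    return cross(l1, l2)
```

Because no division is performed until the final projection back to Euclidean coordinates, the formulation avoids the near-singular divisions that cause the failures the abstract mentions.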

Title of the Paper: A Reusable and Interoperable Semantic Classification Tool Which Integrates Owl Ontology

Authors: Saadia Lgarch, Mohammed Khalidi Idrissi, Samir Bennani

Abstract: In e-Learning systems, the tutor plays a very important role in supporting learners and guaranteeing quality learning. A successful collaboration between learners and their tutor requires the use of communication tools. Thanks to their flexibility in terms of time, asynchronous tools such as discussion forums are the most used. However, this type of tool generates a great mass of messages, making tutoring a complex operation to manage; hence the need for a message classification tool. As a first step, we proposed a semantic classification tool based on LSA and a thesaurus. The possibility that ontology provides to overcome the limitations of the thesaurus encouraged us to use it to control our vocabulary. Through our proposed selection algorithm, the OWL ontology is queried to generate new terms, which are used to build the LSA matrix. The integration of a formal OWL ontology provides a highly relevant semantic classification of messages, and the reuse of the ontological knowledge base by other applications is also guaranteed. Interoperability and knowledge exchange between systems are likewise ensured by the integrated ontology. To ensure its reuse and interoperability with systems requesting its classification service, our semantic classifier is implemented on a service-oriented architecture (SOA), which is explained and tested in this work.

Keywords: E-learning, tutoring, ontology, discussion forum, semantic classification, SOA, reuse, interoperability, web service, orchestration
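The LSA step mentioned in the abstract can be illustrated with a truncated SVD of a term-document matrix. This is a generic LSA sketch over a made-up matrix, not the authors' classifier or its ontology-derived vocabulary.

```python
import numpy as np

def lsa_similarity(term_doc, k=2):
    """Project documents into a k-dimensional latent semantic space via a
    truncated SVD, then return document-to-document cosine similarities."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    docs = (np.diag(s[:k]) @ Vt[:k]).T            # one row per document
    norms = np.linalg.norm(docs, axis=1, keepdims=True)
    docs = docs / np.where(norms == 0.0, 1.0, norms)
    return docs @ docs.T
```

Messages with similar word usage end up close in the latent space even when they share few exact terms, which is what makes LSA attractive for classifying forum messages.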

Title of the Paper: Website Development and Web Standards in the Ubiquitous World: Where Are We Going?

Authors: Serena Pastore

Abstract: A website is actually the first indispensable tool for information dissemination, but its development embraces various aspects: the choice of Content Management System (CMS) to implement production and the publishing process, the provision of relationships and interconnections with social networks and public web services, and the reuse of information from other sources. Meanwhile, the context where web content is displayed has dramatically changed with the advent of mobile devices that are very different in terms of size and features, which have led to a ubiquitous world made of mobile Internet-connected devices. Users’ behavior on mobile web devices means that developers need new strategies for web design. The paper analyzes technologies, methods, and solutions that should be adopted to provide a good user experience regardless of the devices used to visualize the content in its various forms (i.e., web pages, web applications, widgets, social applications, and so on), assuring that open standards are adhered to, and thus providing wider accessibility to users. The strategies for website development cover responsive technology vs. mobile apps, the interconnection with social networks and web services in the Google world, the adoption of embedded data techniques based on specifications (i.e., RDFa, microformats and, microdata), and specific vocabularies (i.e.,

Keywords: CMS, responsive design, mobile apps, social network, Google world, RDFa, microdata, microformats, Facebook OG

Title of the Paper: SPEM 2.0 Extension for Pervasive Information Systems

Authors: José E. Fernandes, Ricardo J. Machado

Abstract: Pervasive computing is a research field of computing technology that aims to achieve a new computing paradigm, in which the physical environment has a high degree of pervasiveness and availability of computers and other information technology (IT) devices, usually with communication capabilities. Pervasive Information Systems (PIS), composed of these kinds of devices, raise issues that challenge software development. Model-Driven Development (MDD), strongly focusing and relying on models, has the potential to allow: the use of concepts closer to the domain and the reduction of semantic gaps; higher automation and lower dependency on technological changes; better capture and reuse of expert knowledge; and an overall increase in productivity. Along with the focus on models, software development processes are fundamental to the efficient development of successful software systems. For the description of processes, the Software & Systems Process Engineering Meta-Model Specification (SPEM) is the current standard published by the Object Management Group (OMG). This paper presents an extension to the SPEM (version 2.0) Base Plug-In Profile that includes the stereotypes needed to support a suitable structural process organization for MDD approaches aiming to develop software for PIS. A case study is provided to evaluate the applicability of the extension.

Keywords: MDD, PIS, SPEM, pervasive, ubiquitous, software engineering, process, information systems

Issue 10, Volume 11, October 2012

Title of the Paper: Classification and Evaluation of Document Image Retrieval System

Authors: Reza Tavoli

Abstract: Document images are documents that normally begin on paper and are then scanned electronically. These documents have rich internal structure and might only be available in image form. Additionally, they may have been created by a combination of printing technologies (or by handwriting), and may include diagrams, tables, graphics, and other non-textual components. Large collections of such complex documents are commonly found in legal investigations. Many approaches exist for indexing and retrieving document images. In this paper we propose a framework for classifying document image retrieval approaches, and then evaluate these approaches against important measures.

Keywords: Document image, retrieval, indexing, information system, query image, machine-print, handwriting

Title of the Paper: Fairness in Physical Products Delivery Protocol

Authors: Abdullah Mohammed Alaraj

Abstract: In an e-commerce transaction, a buyer purchases a physical product, such as a laptop, from an online seller. To protect himself, any seller would prefer to collect payment from the buyer before he sends the product. Likewise, the buyer would prefer to have the product shipped to him before he makes a payment to the seller. Both parties need to take precautions to minimize their risk in case the other party proves to be untrustworthy. This paper proposes a new e-commerce fair exchange protocol based on verifiable and recoverable encryption of keys. The proposed protocol is based on an offline TTP, and only seven messages are exchanged between the parties involved. Disputes are resolved electronically in case one party defaults.

Keywords: Fair exchange protocol; E-commerce; Physical products; Cryptographic protocols; Security protocols

Title of the Paper: Image Sequence Analysis Based on the 3D Relative Potential Inspired by Physical Electro-Static Field

Authors: Xiaodong Zhuang, N. E. Mastorakis

Abstract: A novel three-dimensional relative potential field is proposed for image sequence analysis, which is inspired by the physical electro-static potential. The spatial property of the 3D relative potential is studied, based on which the 3D volume segmentation can be implemented in image sequences. The experimental results of test sequences and real-world video sequences prove that the proposed method provides an effective and convenient way of object segmentation and tracking in video sequences, which can effectively support further analysis and recognition.

Keywords: Image sequence processing, three-dimensional relative potential, electro-statics, video segmentation, object tracking

Title of the Paper: Hardware Modeling of Binary Coded Decimal Adder in FPGA

Authors: Muhammad Ibn Ibrahimy, Md. Rezwanul Ahsan, Iksannurazmi B. Bambang Soeroso

Abstract: Few relevant research works are available on the Field Programmable Gate Array (FPGA) based hardware implementation of a Binary Coded Decimal (BCD) adder, because FPGA-based hardware realization is a quite new and still-developing field of research. This article illustrates the design and hardware modeling of a BCD adder. Among the types of adders, the Carry Look-Ahead (CLA) and Ripple Carry (R-C) adders have been studied, designed, and compared in terms of area consumption and timing. The simulation results show that the CLA adder performs faster with optimized area consumption. Verilog Hardware Description Language (HDL) is used to design the model with the help of the Altera Quartus II Electronic Design Automation (EDA) tool. EDA synthesis tools make it easy to develop an HDL model that can be synthesized into target-specific architectures, while HDL-based modeling provides shorter development phases with continuous testing and verification of system performance and behavior. After successful functional and timing simulations of the CLA-based BCD adder, the design was downloaded to a physical FPGA device: the Altera DE2 board, which contains an Altera Cyclone II 2C35 FPGA.

Keywords: Binary Coded Decimal Adder, Carry Look Ahead, Ripple Carry, Hardware Description Language, Field Programmable Gate Array
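The decimal correction at the heart of any BCD adder (add 6 to a 4-bit digit sum that exceeds 9, so the excess rolls into the carry) is the same whether the carries are rippled or computed with look-ahead logic. A small behavioral model of the digit loop, as an illustrative sketch rather than the authors' Verilog design:

```python
def bcd_add(a, b):
    """Add two BCD-encoded integers digit by digit, applying the classic
    +6 correction whenever a 4-bit digit sum exceeds 9."""
    result, shift, carry = 0, 0, 0
    while a or b or carry:
        d = (a & 0xF) + (b & 0xF) + carry
        if d > 9:
            d += 6            # correction pushes the excess into the carry
        carry = d >> 4
        result |= (d & 0xF) << shift
        shift += 4
        a >>= 4
        b >>= 4
    return result
```

For example, adding BCD 19 and BCD 23 gives BCD 42: the low digits sum to 12, the +6 correction turns that into digit 2 with a carry of 1, and the high digits become 1 + 2 + 1 = 4.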

Issue 11, Volume 11, November 2012

Title of the Paper: Estimation of Algae Growth Model Parameters by a Double Layer Genetic Algorithm

Authors: Artorn Nokkaew, Busayamas Pimpunchat, Charin Modchang, Somkid Amornsamankul, Wannapong Triampo, Darapond Triampo

Abstract: This paper presents a double layer genetic algorithm (DLGA) to improve the performance of information-constrained parameter estimation. When a simple genetic algorithm (SGA) fails, a DLGA is applied to the optimization problem in which the initial condition is missing. In this study, a DLGA is specifically designed: its two layers of SGA serve different purposes, and the two optimizations are applied separately but sequentially. The first layer determines the average value of a state variable where its derivative is zero. The knowledge from the first layer is then used to guide the search in the second layer, which uses the obtained average to optimize the model parameters. To construct a fitness function for the second layer, the relative derivative function of the average is combined into the fitness function of the ordinary least squares problem as a value control. The results show that the DLGA has better performance: when an initial condition is missing, the DLGA provides more consistent numerical values for the model parameters, and simulations produced by the DLGA yield more reasonable values than those produced by the SGA.

Keywords: Algae growth, genetic algorithms, initial values problem, optimization, ordinary differential equations, parameter estimation
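The two-layer scheme can be pictured with a generic real-coded GA run twice in sequence: the first run estimates the steady-state average of the state variable, and the second uses that value as a penalty term in the parameter-fitting fitness. The GA operators, the toy fitness functions, and all constants below are illustrative assumptions, not the paper's algae growth model.

```python
import random

def ga_minimize(fitness, bounds, pop=40, gens=60, seed=0):
    """A minimal real-coded genetic algorithm: tournament selection,
    averaging crossover, and bounded Gaussian mutation."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best = min(P, key=fitness)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=fitness)        # tournament pick 1
            b = min(rng.sample(P, 3), key=fitness)        # tournament pick 2
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            child = [min(max(x + rng.gauss(0, 0.1), lo), hi)
                     for x, (lo, hi) in zip(child, bounds)]  # mutation
            nxt.append(child)
        P = nxt
        best = min(P + [best], key=fitness)
    return best

# Layer 1: recover the steady-state average of the state variable
# (here the true value 3.0 stands in for the zero-derivative condition).
avg = ga_minimize(lambda v: (v[0] - 3.0) ** 2, [(0.0, 10.0)])[0]

# Layer 2: fit a model parameter; the least-squares term is augmented
# with a penalty tying the parameter to the layer-1 average.
params = ga_minimize(
    lambda v: (2.0 * v[0] - 4.0) ** 2 + (v[0] - avg / 3.0) ** 2,
    [(0.0, 10.0)])
```

The point of the sequencing is that the penalty term in layer 2 is only available once layer 1 has converged, which mirrors the paper's "separately but sequentially" design.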

Title of the Paper: Learning Algorithm of Kohonen Network with Selection Phase

Authors: Mohamed Ettaouil, Essafi Abdelatifi, Fatima Belhabib, Karim El Moutaouakil

Abstract: The learning algorithm of the Kohonen network operates without any notion of class, so a labelling phase is necessary to find the class associated with the data. Generally, the size of the topological map is chosen randomly; such a choice affects the performance of the Kohonen algorithm, and consequently the labelling phase becomes difficult to carry out. To overcome this problem, we add a selection phase to the Kohonen learning algorithm. This phase is based on a selection function, constructed using a subset of the data set. In addition, we divide the topological map into two parts: the first contains the used neurons, while the second is formed by the unused ones. It should be noted that the sizes of the two parts are modified by the selection function. To compare our method with the classical one, some experimental results are presented.

Keywords: Kohonen Network, Learning Kohonen, Neural architecture of optimization, Kohonen with Selection phase

Title of the Paper: Fuzzy Reasoning-Based Edge Detection method using Multiple Features

Authors: Li Fang, Weiren Shi, Shuhan Chen

Abstract: Edge detection is an indispensable part of image processing. In this paper, a novel edge detection method based on multiple features and fuzzy reasoning is proposed, which overcomes the limitations of gradient-based edge detection methods and existing fuzzy edge detection algorithms. The new method selects trapezoid fuzzy membership functions, defines multiple features for each pixel from its neighbors, constructs two sets of fuzzy rules, and applies a fuzzy reasoning process to determine whether the central pixel is an edge point. Extensive experimental results demonstrate that the proposed method performs well in preserving low-contrast and blurry edge details, suppressing noise, and keeping fuzzy-rule complexity low.

Keywords: Edge detection, Multiple features, Fuzzy reasoning, Trapezoid fuzzy membership functions, Noise suppression
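A trapezoidal membership function, as named in the abstract, is simple to state; the two-rule inference below is only a schematic illustration of fuzzy edge scoring (the thresholds and the `edge_degree` rule are invented for the example, not the paper's rule base).

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], equals 1 on [b, c],
    falls on [c, d], and is 0 outside [a, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def edge_degree(gradient, texture):
    """Toy two-rule fuzzy inference on 8-bit features: a pixel is
    edge-like when its gradient is HIGH, weakened when local texture
    (a stand-in for noise) is also HIGH."""
    high_grad = trapezoid(gradient, 30, 80, 255, 256)
    high_tex = trapezoid(texture, 20, 60, 255, 256)
    # Rule 1: HIGH gradient -> edge; Rule 2: HIGH texture -> not edge.
    return max(0.0, high_grad - 0.5 * high_tex)
```

The gradual ramps of the trapezoid are what let low-contrast edges receive a partial (rather than zero) edge degree, which a crisp threshold would discard.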

Title of the Paper: Energy Efficient Routing with Transmission Power Control based Biobjective Path Selection Model for Mobile Ad-hoc Network

Authors: V. Bhanumathi, R. Dhanasekaran

Abstract: The main aim of this paper is to find an optimal path that prolongs the network lifetime and to find energy-efficient routes for MANETs. Routing involves path discovery based on Received Signal Strength (RSS) and residual energy, and path selection based on an optimized biobjective model, where the two objectives are energy and hop count. In the path discovery process, the required transmit power is first varied to reduce energy consumption. Then the RSS of the Route Request (RREQ) and a node's remaining energy are validated to decide whether the node may forward the RREQ. Selecting the path that consumes less energy and fewer hops extends the network lifetime. Theoretical computation is compared with the simulation results. As far as the simulation is concerned, the results of the proposed model, called Energy Efficient Biobjective Path Selection (EE-BPS), are encouraging: its selection process yields energy savings. If, on the other hand, the optimal path is not considered, the nodes will drain soon and the network lifetime will decrease.

Keywords: Energy consumption, hop count, mobile adhoc network, network lifetime, path discovery, path selection, received signal strength and transmit power control

Issue 12, Volume 11, December 2012

Title of the Paper: An Optimistic Concurrency Control Approach Applied to Temporal Data in Real-time Database Systems

Authors: Walid Moudani, Nicolas Khoury, Mohamad Hussein

Abstract: Real-time database systems (RTDBS) have received growing attention in recent years. An RTDBS is a database system in which transactions have explicit timing constraints such as deadlines. The performance and correctness of an RTDBS are highly dependent on how well these deadlines can be met, so the scheduling of transactions is driven by priority considerations rather than fairness considerations. Concurrency control is one of the main issues in the study of RTDBS. Optimistic concurrency control methods have the properties of being non-blocking and deadlock-free, which are attractive for RTDBS. Furthermore, in actual applications, real-time database systems must not only ensure that transactions finish within their specified time limits (deadlines), but also guarantee the temporal consistency of the data objects accessed by transactions. In this paper we propose an optimistic concurrency control method based on Similarity, Importance of transactions, and Dynamic Adjustment of Serialization Order, called OCC-SIDASO. This method uses dynamic adjustment of the serialization order, operation similarity, and transaction importance to maintain transaction timeliness, minimize wasted transaction restarts, and guarantee the temporal consistency of data and transactions.

Keywords: Real-time Database Systems, Optimistic Concurrency Control (OCC), Temporal Consistency, Serialization Order

Title of the Paper: Embedded Event and Trend Diagnostics to extract LDA Topic Models on Real Time Multi-Data Streams

Authors: Walisa Romsaiyud, Wichian Premchaiswadi

Abstract: Existing latent Dirichlet allocation (LDA) methods model documents as random mixtures over latent topics, where each topic is characterized by a distribution over words, for both batch and continuous streams over time. However, it is nontrivial to explore correlations among multiple data streams; for instance, documents from different data streams about the same topic may have different time stamps. This paper introduces a novel algorithm based on the latent Dirichlet allocation (LDA) topic model. The algorithm includes two main methods. The first introduces a principled approach to detecting surprising events in documents: the embedded events and trends of the model parameters are used for filtering surprising events and preprocessing documents in an associated time sequence. The second suits real-time monitoring and control of the process over multiple asynchronous text streams. In the experiments, these two methods were executed alternately, and after several iterations monotonic convergence can be guaranteed. The advantages of our approach were justified through extensive empirical studies on two real data sets, from news and micro-blogging sources respectively.

Keywords: Latent Dirichlet allocation (LDA), Topic model, Asynchronous Text Stream, Time-Stamped Documents, Fuzzy K-Mean Clustering, Semantic Analysis
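For readers unfamiliar with the underlying model, a minimal collapsed Gibbs sampler for plain LDA is sketched below. It covers only the base topic model; the paper's event-detection and asynchronous-stream extensions are not represented, and all hyperparameters here are illustrative defaults.

```python
import random

def lda_gibbs(docs, K=2, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA; docs are lists of word ids.
    Returns per-document topic counts (the document-topic mixture, unnormalized)."""
    rng = random.Random(seed)
    V = max(w for d in docs for w in d) + 1
    nd = [[0] * K for _ in docs]        # doc-topic counts
    nw = [[0] * V for _ in range(K)]    # topic-word counts
    nt = [0] * K                        # topic totals
    z = []
    for di, d in enumerate(docs):       # random initial topic assignments
        zs = []
        for w in d:
            t = rng.randrange(K)
            zs.append(t)
            nd[di][t] += 1; nw[t][w] += 1; nt[t] += 1
        z.append(zs)
    for _ in range(iters):              # resample each token's topic
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]
                nd[di][t] -= 1; nw[t][w] -= 1; nt[t] -= 1
                weights = [(nd[di][k] + alpha) * (nw[k][w] + beta)
                           / (nt[k] + V * beta) for k in range(K)]
                t = rng.choices(range(K), weights)[0]
                z[di][wi] = t
                nd[di][t] += 1; nw[t][w] += 1; nt[t] += 1
    return nd
```

Each resampling sweep draws a token's topic from the conditional distribution implied by the current counts, which is the "random mixtures over latent topics" machinery the abstract builds on.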

Title of the Paper: Pa-GFDP: An Algorithm Enhancing Reliability of WSNs

Authors: Guiping Wang, Shuyu Chen, Huawei Lu, Mingwei Lin

Abstract: Connectivity of Wireless Sensor Networks (WSNs) is a minimal requirement for their functionality. However, their distributed and self-organizing nature gives rise to critical nodes, whose failures may partition the system or create communication bottlenecks. This paper focuses on enhancing the reliability of WSNs by detecting critical nodes and protecting them. The classical centralized algorithms for detecting critical nodes, which are based on DFS, require global topological knowledge. However, WSNs are subject to dynamic factors such as frequent joining and departure of nodes, unexpected node failures due to energy exhaustion, and changes in network connections; consequently, the topology of a WSN is dynamic and centralized algorithms are impractical. This paper extends the studies on GFDP (Grouping Fault Detection Protocol) and proposes Partitioning-avoidance GFDP (Pa-GFDP) to enhance the reliability of WSNs. Pa-GFDP cost-effectively detects critical nodes in WSNs and protects them. Without global information, accurate detection of critical nodes can be accomplished with little traffic overhead and within a limited time threshold. Pa-GFDP is verified to be correct and effective through simulation and experiments.

Keywords: Wireless Sensor Networks (WSNs); Reliability; Connectivity; Critical Node; Fault Detection; Pa-GFDP
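The centralized DFS-based baseline the abstract contrasts with is the classical articulation-point algorithm: a node is critical exactly when removing it disconnects the graph. A sketch of that baseline (which needs the global topology that, per the abstract, a dynamic WSN cannot afford):

```python
def articulation_points(adj):
    """Find critical (cut) nodes of an undirected graph with one DFS pass
    using Tarjan's low-link rule; adj maps node -> list of neighbors."""
    disc, low, result = {}, {}, set()

    def dfs(u, parent, time):
        disc[u] = low[u] = time
        children = 0
        for v in adj[u]:
            if v not in disc:
                children += 1
                time = dfs(v, u, time + 1)
                low[u] = min(low[u], low[v])
                # No descendant of v can reach above u: u is critical.
                if parent is not None and low[v] >= disc[u]:
                    result.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        # A DFS root is critical iff it has more than one DFS subtree.
        if parent is None and children > 1:
            result.add(u)
        return time

    for u in adj:
        if u not in disc:
            dfs(u, None, len(disc))
    return result
```

On a chain of four sensors 0-1-2-3, nodes 1 and 2 are critical; on a ring, no node is, which is why topologies with redundant links avoid partitioning.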