Volume 4, Issue 10, October 2012

SPDY: A New Protocol for the Web and Its Impact on Traffic Classification – An Overview
Michael Finsterbusch, Chris Richter, Klaus Hanßgen and Jean-Alexander Muller

This paper gives a short overview of the SPDY protocol. We describe the protocol design, its constraints, and its implications for the TLS protocol. We discuss the impact of SPDY on different kinds of deep packet inspection and on the TLS extension Next Protocol Negotiation. Furthermore, we surveyed the 100,000 most popular web sites to determine SPDY support, observing a widespread increase in the number of domains and web servers that support the SPDY protocol.
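
As a rough illustration of how SPDY support can be detected through the TLS Next Protocol Negotiation (NPN) extension, the sketch below probes a single host in Python. It is a minimal sketch, not the authors' measurement tool, and it assumes a Python/OpenSSL build that still supports NPN (modern OpenSSL builds replaced NPN with ALPN).

    # Probe one host for SPDY support advertised via the TLS NPN extension.
    # Sketch only: requires an OpenSSL build that still implements NPN.
    import socket
    import ssl

    def probe_spdy(host, port=443):
        if not ssl.HAS_NPN:
            raise RuntimeError("this OpenSSL build has no NPN support")
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE   # survey probe, not a secure channel
        ctx.set_npn_protocols(["spdy/3", "spdy/2", "http/1.1"])
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # e.g. "spdy/3" if the server advertises SPDY, else "http/1.1"
                return tls.selected_npn_protocol()

    print(probe_spdy("www.google.com"))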

Keywords: Internet; Protocol architecture; Traffic analysis; Web server

Implementing New Automation and Programmable Devices in Traffic Flow Control Using Passive Defense Approaches
Jalal Nakhaei

The goal of this paper is to state and evaluate the differences in gap acceptance between left and right lane changes, and to examine overall driver aggressiveness by means of right-lane-change behavior, using electrical instruments to reach this goal. Furthermore, we apply digital signal processing to the footage from our monitoring cameras to distinguish different driver behaviors. We also evaluate the decision-making process of drivers, using electrical sensors to collect data, cleaning and processing it, and finally deriving cumulative distribution functions of driver lane-change behavior from the observed field data. The experiments were performed on drivers using I-20 in Grand Prairie, Texas, with roadside monitoring cameras and other electronic instruments mounted near the intersection of I-20 and Great Southwest Blvd. Our experiments and evaluations demonstrate that the overall ratio of right-lane-change observations to left-lane-change observations was close to 3 to 1.
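
As an illustration of the cumulative-distribution-function analysis mentioned above, the sketch below computes an empirical CDF of accepted gaps; the gap values are made-up placeholders, not the paper's field data.

    # Empirical CDF of accepted gap sizes for right vs. left lane changes.
    # Sketch with placeholder values; real input would come from the
    # camera/sensor measurements described in the paper.
    import numpy as np

    def ecdf(samples):
        x = np.sort(np.asarray(samples, dtype=float))
        y = np.arange(1, len(x) + 1) / len(x)     # P(gap <= x)
        return x, y

    right_gaps = [2.1, 2.8, 3.0, 3.4, 4.2, 5.0]   # seconds, hypothetical
    left_gaps = [3.5, 4.1]                        # fewer left changes (~3:1)

    for label, gaps in [("right", right_gaps), ("left", left_gaps)]:
        x, y = ecdf(gaps)
        print(label, list(zip(x.tolist(), y.round(2).tolist())))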

Keywords: Electronic devices, roadside control camera, lane change, digital image processing, electronic control systems

Predicting User Eagerness for Emerging Applications on Cloud Platforms and Enhancing Passive Defense Reliability on Such Networks via Kalman Filtering and Its Relationship to Recursive Bayesian Estimation
Jalal Nakhaei and Mehdi Darbandi

Before the invention of computers, if you wanted to find specialized information about something, you had to physically go to a library, and only if you were lucky enough to find relevant textbooks could you learn something about the subject. This method has several drawbacks: it consumes time and energy, and you may find nothing at all. With the advent of computers and electronic versions of books and magazines, you can find information on a topic of interest in an instant. Through social and communication networks, such as cloud computing platforms, you can also read others' suggestions and opinions on a topic, establish new workgroups, and collaborate on a project simultaneously from anywhere. The Information Technology (IT) world, which has become one of the vital necessities of the current era, is facing many revolutions and fundamental changes in this century. With these changes, the need for information security, fast and agile processing, and dynamic access has grown. A major concern of users and organizations is the security of information held on cloud computing resources. To address this concern, scientists and organizations all over the world are working to release new standards, and to develop existing ones, for accessing and using such information. The shared purpose of these research papers and standards is to increase the security level of cloud computing; each researcher introduces new methods for hardening such networks, applying evolutionary algorithms, track-and-trace methods, or intelligent methods for detecting and eliminating hazardous actions. Some of these have clear strengths, and some are not practical at all. In this paper, we first describe cloud computing, defining it and discussing the main aspects of this new generation of the Internet; we then briefly review its uses across different industries and technologies. In the next section, building on these concepts and key features, we introduce the Kalman filter and its relationship to recursive Bayesian estimation, which can be used to estimate and predict various parameters in such networks: for example, to predict intrusion attempts by hackers, to forecast how many users will use a particular part of the network in a given period of time, or to track and trace packets or even users. By performing such activities we are able to eliminate malicious and spyware threats at the entry points of our network.
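
As a minimal sketch of the estimator the paper builds on, the following one-dimensional Kalman filter performs recursive Bayesian estimation under linear-Gaussian assumptions; the "user load" interpretation and all noise parameters are illustrative assumptions, not values from the paper.

    # One-dimensional Kalman filter: recursive Bayesian estimation under
    # linear-Gaussian assumptions. The hidden state could be, for example,
    # the number of users expected on one part of a cloud platform.
    def kalman_1d(observations, q=1.0, r=4.0, x0=0.0, p0=100.0):
        x, p = x0, p0                 # state estimate and its variance
        estimates = []
        for z in observations:
            # Predict: random-walk state model, process noise variance q.
            p = p + q
            # Update: fold in measurement z with noise variance r.
            k = p / (p + r)           # Kalman gain
            x = x + k * (z - x)
            p = (1.0 - k) * p
            estimates.append(x)
        return estimates

    # Hypothetical noisy load measurements:
    loads = [10, 12, 11, 15, 14, 18, 17, 21]
    print([round(e, 1) for e in kalman_1d(loads)])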

Keywords: Computer era, Cloud computing, evolutionary algorithms, Kalman filtering

A Comparative Study of Single-step and Multi-step Data Mining Tools
Dost Muhammad Khan and Nawaz Mohamudally

There is broad agreement that data mining is not a single-step process: the discovery of knowledge from a dataset is the result of successive processes, i.e., multiple steps. Current data mining tools are designed to solve discrete, consecutive tasks, such as classification or clustering; hence the tools end up being single-step processes and fail to produce knowledge. Furthermore, in single-step tools the extraction of knowledge depends on the right choice of algorithm and on how the output is analyzed, because most of these tools are generic, with no context-specific logic attached to the application. The choice of algorithm remains ad hoc in many data mining tools. The scientific community is well aware of this problem and has faced multiple challenges in establishing consensus on a unified data mining theory (UDMT) based on multi-step data mining processes. In this paper we compare single-step and multi-step data mining tools. In single-step data mining tools the selection of algorithms is ad hoc, which is inadequate for producing 'knowledge'. In contrast, a multi-step data mining tool, where the selection of algorithms depends on the nature of the data, provides 'knowledge' to the user.
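
To make the contrast concrete, the sketch below chains preprocessing, transformation and mining into one multi-step process and compares it with running the mining algorithm alone on raw data; scikit-learn is our illustrative choice here, not one of the tools evaluated in the paper.

    # Multi-step mining as a pipeline (preprocess -> reduce -> cluster),
    # versus a single-step run of the clustering algorithm alone.
    from sklearn.datasets import load_iris
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    X = load_iris().data

    multi_step = Pipeline([
        ("scale", StandardScaler()),                 # step 1: normalize
        ("reduce", PCA(n_components=2)),             # step 2: transform
        ("mine", KMeans(n_clusters=3, n_init=10)),   # step 3: mine
    ])
    single_step = KMeans(n_clusters=3, n_init=10)    # ad-hoc, raw data

    print(multi_step.fit_predict(X)[:10])
    print(single_step.fit_predict(X)[:10])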

Keywords: Unified Data Mining Theory (UDMT), ODM, MS SQL Server, Unified Data Mining Tool (UDMTool), Single-step Tool, Multi-step Tool, Unified Data Mining Processes (UDMP)

Serial Number based IPv6 Addressing Architecture in a Foreign IPv4 Network
Hamid Ali and Arbab Muhammad Arshad

Mobility of a mobile node can be achieved by using two IP addresses: the home address, used for identification of the TCP connection, is static, while the care-of address (CoA) changes each time the point of attachment changes [1]. The CoA can therefore be thought of as the mobile node's topologically significant address. IPv4 and IPv6 will co-exist for a long time, as it is impossible to switch the entire Internet over to IPv6 overnight; transition mechanisms have therefore been devised to make the transition to IPv6 smooth [2]. This paper proposes a solution that lets an IPv6 node obtain an address in an IPv4 address-family network, in the context of integrated IPv6 and IPv4 networks, so that an IPv6-configured node is not restricted to roaming only in IPv6 address-family networks; the proposed technique enables a mobile node (MN) to roam into IPv4 address-family networks as well [3]. In the new addressing mechanism, at the foreign network, the mobile node places the DHCPv6-allocated 28-bit serial number in bit positions 5 to 32 and the 32-bit IPv4 address of the default router in bit positions 33 to 64 of the network part of the newly generated 128-bit IPv6 care-of address. Using the IPv4 address of the default router in the CoA of the MN helps other routers in the Internet to easily identify the current location of the MN and to establish communication links between the MN and correspondent nodes (CNs). The main focus of the proposed technique is to allow an IPv6-configured MN to also roam into an IPv4-configured network and thus obtain services in that different address family.
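
The bit layout described above can be sketched directly; the 4-bit leading prefix and the 64-bit interface identifier below are placeholders, since the abstract specifies only bit positions 5 to 32 and 33 to 64.

    # Build the 128-bit care-of address described in the abstract:
    # bits 5-32 (from the most significant end): 28-bit DHCPv6 serial number
    # bits 33-64: 32-bit IPv4 address of the foreign default router
    # Bits 1-4 and the low 64 bits are not specified in the abstract,
    # so they are placeholders here.
    import ipaddress

    def build_coa(serial_28, router_v4, prefix4=0x2, iface_id=0x1):
        assert serial_28 < (1 << 28) and prefix4 < (1 << 4)
        v4 = int(ipaddress.IPv4Address(router_v4))
        coa = (prefix4 << 124) | (serial_28 << 96) | (v4 << 64) | iface_id
        return ipaddress.IPv6Address(coa)

    # Hypothetical serial number and router address:
    print(build_coa(0x0ABCDEF, "192.0.2.1"))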

Keywords: IPv6 address architecture, IPv6 care-of-address configuration, MIPv6 users, mobility management in mobile environments

A Novel Algorithm to Determine the Quality of a Web Page for Improving the Search Engine Ranking Framework
Sheikh Muhammad Sarwar, Md. Mustafizur Rahman and Mosaddek Hossain Kamal

This paper proposes and develops a general, formal static rank computation algorithm for ranking web documents according to the availability, significance, appeal and relevance of the images they contain. Different types of images appear in a web document; some increase the content quality of the web page, while others are deemed irrelevant to the content. Moreover, some images are not appealing enough to catch users' attention and do not necessarily improve the content. In this paper, a static ranking algorithm in the spirit of PageRank is proposed, which works by analyzing the images appearing in a web document. A method for integrating this algorithm with a complete ranking framework based on the Markov Random Field model is also presented. The algorithm computes a metric, the Image Based Quality Value (IBQV), that quantifies the extent to which the images in a web document increase its value. The theoretical and practical implications of IBQV have been shown, and experimental results indicate that incorporating IBQV increases the correctness of the search results.
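
As a toy sketch of how such a per-page value might be aggregated, the following combines the four factors the abstract names (availability, significance, appeal, relevance); the weights and the averaging are hypothetical, not the authors' actual IBQV computation.

    # Hypothetical IBQV aggregation over the images of one web page.
    # Each image contributes a weighted score of the four factors named
    # in the abstract; weights and mean-aggregation are placeholders.
    def ibqv(images, weights=(0.25, 0.25, 0.25, 0.25)):
        if not images:
            return 0.0
        wa, ws, wp, wr = weights
        scores = [
            wa * img["availability"] + ws * img["significance"]
            + wp * img["appeal"] + wr * img["relevance"]
            for img in images
        ]
        return sum(scores) / len(scores)   # page-level value in [0, 1]

    page_images = [
        {"availability": 1.0, "significance": 0.8, "appeal": 0.6, "relevance": 0.9},
        {"availability": 1.0, "significance": 0.2, "appeal": 0.3, "relevance": 0.1},
    ]
    print(round(ibqv(page_images), 3))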

Keywords: Information Retrieval, Web Page Ranking, Image Search Engine, Markov Random Field Model

A Hybrid GA-SA Algorithm for the University Timetabling Problem
Nguyen Tan Tran Minh Khang and Tran Thi Hue Nuong

This paper presents a hybrid GA-SA algorithm for solving a university timetabling problem in Vietnam. The hybrid algorithm combines a Genetic Algorithm (GA) with Simulated Annealing (SA). It is tested on 14 real-world datasets and compared with other approaches. The computational results indicate the effectiveness of the proposed algorithm.
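
One common way to hybridize the two metaheuristics is to run a GA whose offspring are accepted according to the simulated-annealing criterion; the sketch below applies this to a toy objective and does not reproduce the paper's timetabling encoding, operators or constraints.

    # Generic GA-SA hybrid on a toy minimization problem: GA variation
    # (crossover + mutation) with SA-style acceptance of offspring.
    import math
    import random

    def cost(x):                   # toy objective: minimize sum of squares
        return sum(v * v for v in x)

    def ga_sa(pop_size=20, dim=8, gens=200, temp=10.0, cooling=0.98):
        pop = [[random.uniform(-5, 5) for _ in range(dim)]
               for _ in range(pop_size)]
        for _ in range(gens):
            a, b = random.sample(pop, 2)              # parents
            cut = random.randrange(1, dim)
            child = a[:cut] + b[cut:]                 # one-point crossover
            i = random.randrange(dim)
            child[i] += random.gauss(0, 0.5)          # mutation
            worst = max(range(pop_size), key=lambda j: cost(pop[j]))
            delta = cost(child) - cost(pop[worst])
            # SA acceptance: take improvements, sometimes accept worse.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                pop[worst] = child
            temp *= cooling                           # cooling schedule
        return min(pop, key=cost)

    print(round(cost(ga_sa()), 4))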

Keywords: course timetabling, university school timetabling, curriculum-based course timetabling, metaheuristic, exploration, exploitation, genetic algorithm, simulated annealing, hybrid heuristics

Cloud Computing: Deployment Issues for Enterprise Systems
Ammar Khalid, Yasir Fayyaz and Dost Muhammad Khan

Cloud computing has developed from a promising business idea into one of the fastest-growing sectors of the Information Technology industry. Organizations are now progressively moving into this technology in order to obtain reliable services at minimal cost. But as small and medium-sized businesses look to adopt economical computing resources for their business applications, there is a need to identify all the issues that arise while deploying them. This paper highlights some of the most critical issues, along with mitigating steps, in order to achieve a rewarding deployment. It also describes some future development work on the underlying concept.

Keywords: IaaS, PaaS, SaaS

Enhanced Quality Metrics for Identifying Class Complexity and Predicting Faulty Modules Using Ensemble Classifiers
C. Neelamegam and M. Punithavalli

The software industry is increasingly relying on software metrics to determine the extent of software quality through defect identification. Detection of faults helps to reduce the time and cost spent during every phase of the software life cycle. Identifying quality metrics is a challenging task due to ever-changing software requirements and the increasing complexity and size of software applications. Using quality metrics for fault detection is a two-step process: the first step measures the complexity of the software module, which is then used in the second step to predict faulty modules. In this paper, along with the traditional object-oriented metrics, four new metrics are proposed. An ensemble classification model that combines three classifiers is proposed to predict faulty modules in C++ projects. The performance of the proposed system is analyzed in terms of accuracy, precision, recall and F-measure. The experimental results showed a positive improvement in prediction performance with the inclusion of the proposed metrics and the ensemble classifier.
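
A minimal sketch of the kind of ensemble described, using a majority-vote combiner from scikit-learn; the three base classifiers and the synthetic "module metrics" data are illustrative assumptions, not the paper's choices.

    # Majority-vote ensemble of three classifiers for fault prediction,
    # evaluated with accuracy, precision, recall and F-measure.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import (accuracy_score, precision_score,
                                 recall_score, f1_score)

    # Rows = modules described by complexity metrics; label 1 = faulty.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    ensemble = VotingClassifier([
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier()),
        ("rf", RandomForestClassifier()),
    ], voting="hard")
    ensemble.fit(X_tr, y_tr)
    pred = ensemble.predict(X_te)

    for name, fn in [("accuracy", accuracy_score),
                     ("precision", precision_score),
                     ("recall", recall_score),
                     ("F-measure", f1_score)]:
        print(name, round(fn(y_te, pred), 3))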

Keywords: Class Complexity, Defect Detection, Ensemble Classification, Object-Oriented Software, Quality Metrics, Software Quality
