Akram Darwish "Supervised Discovery of Morphologically Related Words in Arabic"
In this paper, we present an experimental study of discovering morphologically related words in a given Arabic corpus. The study aims to determine the main characteristics that help to discover pairs of morphologically related words in Arabic. These characteristics extend the traditional orthographic similarities of a candidate pair with new types of features. We apply several supervised learning methods to test the contributions of these different features in discovering correct candidate pairs. The initial results show that some of the new features contribute substantially. These results indicate that we can profitably build on the simple orthographic features with new types of features to alleviate the difficulty of Arabic morphology discovery.
Alastair Abbott "Ontology-Aided Product Classification: A Nearest Neighbour Approach"
In this paper we present a k-Nearest Neighbour case-based reasoning system for classifying products into an ontology of classes. Such a classifier is of particular use in the business-to-business electronic commerce industry, where maintaining accurate product catalogues is critical for accurate spend analysis and effective trading. Universal classification schemas, such as the United Nations Standard Products and Services Code hierarchy, have been created to aid this process, but classifying items into such a hierarchical schema is a critical and costly task. While (semi-)automated classifiers have previously been explored, items not initially classified still have to be classified by hand in a costly process. To help overcome this issue, we develop a conversational approach which utilises the known relationships between classes to allow the user to reach a correct classification much more often and with minimal effort.
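To make the nearest-neighbour core concrete, here is a minimal sketch in Python (not the paper's implementation: the bag-of-words cosine similarity, the example names and the flat label set stand in for the ontology-aware features and conversational refinement described above):

    from collections import Counter
    import math

    def cosine(a, b):
        # Cosine similarity between two bag-of-words Counters.
        shared = set(a) & set(b)
        dot = sum(a[t] * b[t] for t in shared)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def knn_classify(description, labelled_cases, k=5):
        # Vote among the k most similar previously classified products.
        query = Counter(description.lower().split())
        ranked = sorted(labelled_cases, reverse=True,
                        key=lambda case: cosine(query, Counter(case[0].lower().split())))
        votes = Counter(label for _, label in ranked[:k])
        return votes.most_common(1)[0][0]

    cases = [("laser printer toner", "consumables"),
             ("a4 copy paper", "consumables"),
             ("desktop computer", "hardware")]
    print(knn_classify("toner cartridge for printer", cases, k=2))  # consumables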
Allan Tabilog "A Semantic Embedding of Separation Logic in Isabelle/HOL"
Separation logic is an extension of Hoare logic invented by Reynolds, O'Hearn, et al., originally for reasoning about imperative programs that access shared mutable data structures. Over the past decade, separation logic has been extended to support reasoning about concurrent programs and object-oriented programs, among others, and tool support has been developed for mechanised program verification. In this paper we present some initial results on a shallow (semantic) embedding of separation logic in the Isabelle/HOL theorem prover. We discuss some key features of our embedding and outline research plans towards the development of a framework for mechanically verifying concurrent programs using extensions of separation logic.
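For readers new to the logic, the two ingredients any such embedding must capture are the semantics of the separating conjunction and the frame rule it validates. In standard notation (not the paper's Isabelle/HOL syntax):

    h \models P * Q \;\iff\; \exists h_1\, h_2.\; h_1 \perp h_2 \,\wedge\, h = h_1 \uplus h_2 \,\wedge\, h_1 \models P \,\wedge\, h_2 \models Q

    \frac{\{P\}\; C\; \{Q\}}{\{P * R\}\; C\; \{Q * R\}} \qquad \text{(provided } C \text{ modifies no variable free in } R\text{)}

That is, P * Q holds of a heap that splits into two disjoint parts satisfying P and Q respectively, which is what makes local reasoning about shared mutable state possible.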
Andrew Meads "OdinTools – Model-Driven Development of Intelligent Mobile Services"
Today’s computationally capable mobile devices can act as service providers, as opposed to their traditional role as consumers. To address the challenges associated with the development of these mobile services, we have developed Odin, a middleware which masks complexity and allows rapid development of mobile services. Odin, however, does not support cross-platform development, which is an important concern given today's wide variety of mobile devices. To solve this problem, we have designed OdinTools - a model-driven toolkit for cross-platform development of mobile services. Leveraging appropriate metamodels, we have implemented a prototype in Eclipse and Marama that allows developers to model mobile services in a platform-independent manner. We are currently working on transformations between levels of the model hierarchy which will allow full Odin-based service implementations to be generated automatically.
Ann Silva "Evaluation of Facial Expressions of Web Users"
This research investigated and simulated a proposed system for evaluating the emotions of web users and classifying them to gauge user satisfaction. This paper describes a methodology for detecting, extracting and recognizing emotions for web user classification. The proposed method is based on the identification of web users' facial expressions: the face is first detected in the image using Successive Mean Quantisation Transform (SMQT) features and the Sparse Network of Winnows (SNoW) classifier. Then the specific facial features are extracted using a technique called Edge Counting. After facial feature extraction, motion analysis techniques are used to determine the relative spatial motion of the facial feature displacement, and the motion vectors are sent to a neural network module that classifies the web user into three states from the service provider's perspective: satisfied with the service they are using, not satisfied, or indifferent. The proposed system can be used by service providers to monitor user satisfaction and estimate the quality of experience in real time. Future research will validate the efficacy of the proposed methodology and algorithms using data collected through a survey; validation is beyond the scope of this research.
Anna Huang "Learning pairwise document similarity"
Measuring the similarity between two documents is a crucial component of many problems such as document clustering, categorization and information retrieval. This paper presents a new method that uses machine learning to learn the document similarity measure. Representing documents with concepts instead of words has gained considerable research attention recently. Concepts and their semantic relations provide a new perspective for describing the connections between documents. This paper unifies various features depicting how closely two documents relate to each other, and employs them for learning the document similarity measure. The learned measure is evaluated against human judgements and compared to the cosine similarity measure, the most commonly used in the literature. Empirical results show that the learned measure achieves the same level of consistency as humans and outperforms state-of-the-art methods.
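As an illustration of the general idea only (a minimal sketch, not the paper's method: the two set-overlap features and the linear least-squares learner are placeholders for the richer word- and concept-based features described above):

    import numpy as np

    def pair_features(d1, d2):
        # Placeholder pairwise features over two non-empty documents.
        w1, w2 = set(d1.split()), set(d2.split())
        jaccard = len(w1 & w2) / len(w1 | w2)
        overlap = len(w1 & w2) / min(len(w1), len(w2))
        return [jaccard, overlap, 1.0]   # trailing 1.0 is a bias term

    def learn_similarity(pairs, human_scores):
        # Fit a linear combination of the features to human judgements.
        X = np.array([pair_features(a, b) for a, b in pairs])
        y = np.array(human_scores)
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return lambda a, b: float(np.dot(w, pair_features(a, b)))

The learned function can then be compared against plain cosine similarity on held-out pairs scored by human judges.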
Antti Puurula "Mixture Models for Multi-label Text Classification"
Multi-label classification tasks are becoming commonplace with increasingly larger databases. The majority of these tasks involve classification of texts, such as categorization of Wikipedia articles and tagging in social media websites. Current leading multi-label classifiers lack scalability to large sets of documents and labels. Mixture models offer a promising alternative with high performance and linear scaling to larger datasets in both training and classification. This paper provides a review of the existing work on multi-label mixture models for text classification, with an introduction to the theoretical preliminaries and a discussion on the future prospects of this line of research for large scale databases.
Aram Ter-Sarkisov "Estimating First Hitting Times of an Evolutionary Algorithm Using the Modified Binomial Model and Markov Chains"
We introduce the Modified Binomial Model, an application of the binomial asset pricing model of stochastic finance to the analysis of evolutionary algorithms. We demonstrate its application by computing the expected first hitting time under this model for a simple GA and comparing it to a model based on a Markov chain, deriving upper bounds on the convergence rate under both models for the OneMax fitness function. While computational experiments show that the upper bounds under both models are rather loose, they provide some encouragement for further work on this method.
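The Markov-chain side of such an analysis is easy to illustrate. For a (1+1)-style algorithm on OneMax that flips one uniformly chosen bit per step and keeps the better string, the chain only moves towards the optimum, so the expected first hitting time is the sum of the expected waiting times of the improving transitions (a textbook sketch under these assumptions, not the paper's Modified Binomial Model):

    def expected_hitting_time(n, start_ones=0):
        # From a state with i ones, the single-bit flip improves with
        # probability (n - i) / n, so the expected wait for that move
        # is n / (n - i); elitist selection rejects worsening flips.
        return sum(n / (n - i) for i in range(start_ones, n))

    print(expected_hitting_time(100))   # n * H_n, roughly 518.7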
Ayesha Hakim "Extraction of Facial Changes using the Differential Images"
The technological world is moving towards more effective and friendly human-computer interaction. A key factor in these emerging requirements is the ability of future systems to recognise human emotions, since emotional information is an important part of human-human communication and is therefore expected to be essential in natural and intelligent human-computer interaction. Extensive research has been done on emotion recognition, but virtually all of it deals with full-blown emotions, despite the fact that the dynamic changes in facial appearance over time are important for categorisation of complex emotional states as well as accurate recognition of the basic emotions. In this paper, we propose a method based on normalized, optimized differential images of two consecutive images or video frames. The proposed method is able to extract facial changes caused by the happiness, sadness, anger, fear, disgust, and surprise emotions.
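The core differencing step can be sketched in a few lines of numpy (the normalisation shown is plain min-max scaling; the optimisation of the differential images described above is not reproduced here):

    import numpy as np

    def differential_image(prev_frame, next_frame):
        # Normalised absolute difference of two consecutive grey-scale
        # frames; bright pixels mark facial regions that changed.
        diff = np.abs(next_frame.astype(np.float64) -
                      prev_frame.astype(np.float64))
        span = diff.max() - diff.min()
        return (diff - diff.min()) / span if span > 0 else np.zeros_like(diff)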
Benjamin McDonald "A Review of Automated Layout Techniques for Media on Large Screen Displays"
Advancements in technology are enabling larger display sizes at lower costs. Large, wall sized screens can now be found in public squares displaying movies, news and weather or in science labs displaying enlarged visualisations of data. Large display layout refers to the placing and sizing of content for users. Automated layout refers to the use of an algorithm to produce layouts without human guidance. Researchers have reviewed manual layouts for large screens and others have reviewed automated layouts in general. We review current automated layouts and evaluate their suitability to be applied on large screens. We discuss new challenges when applying automated layouts to large screens and evaluate current layout techniques.
Bing Xue "Overview of Evolutionary Algorithms for Feature Selection"
Feature selection is the process of selecting a subset of relevant features from a large number of original features, with the aim of achieving similar or better classification performance while improving computational efficiency. As an important data preprocessing technique, feature selection has been researched for over four decades. Determining an optimal feature subset is a complicated problem, and due to the limitations of conventional methods, many researchers have introduced evolutionary algorithms to feature selection. This paper presents a review of evolutionary algorithms for feature selection, especially particle swarm optimisation (PSO). After describing the background of feature selection and PSO, recent work involving PSO and other evolutionary algorithms for feature selection is reviewed.
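To fix ideas, a minimal binary PSO for feature selection looks roughly as follows (a generic sketch of the standard sigmoid-velocity scheme, not any specific method from the review; `fitness` would typically be the validation accuracy of a classifier trained on the selected feature subset):

    import math, random

    def binary_pso(n_features, fitness, swarm=20, iters=50, w=0.7, c1=1.5, c2=1.5):
        # Sigmoid-velocity binary PSO: each velocity component is squashed
        # into a probability of setting that feature's bit to 1.
        pos = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(swarm)]
        vel = [[0.0] * n_features for _ in range(swarm)]
        pbest = [p[:] for p in pos]
        pbest_f = [fitness(tuple(p)) for p in pos]
        g = max(range(swarm), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(iters):
            for i in range(swarm):
                for d in range(n_features):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
                f = fitness(tuple(pos[i]))
                if f > pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], f
                    if f > gbest_f:
                        gbest, gbest_f = pos[i][:], f
        return gbest, gbest_f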
Craig Taube-Schock "High Coupling Is Unavoidable: An Empirical Study"
Dependencies between entities in a complex system affect its evolvability. To minimize the overall dependency in a software system, it is considered good design practice to organize code into modules and to favour within-module dependency over between-module dependency (termed coupling). Unfortunately, there is little empirical evidence concerning dependency and its effects in real software systems. In particular, it is not evident whether the maxim “low coupling/high cohesion” should be promoted to the extent commonly seen. To address this fundamental gap in our knowledge, we evaluate coupling in 100 open source software systems (the Qualitas corpus). Analysis reveals that the overall distribution of coupling in each of these systems (regardless of its maturity, its size, or the size of its user community) displays a consistent scale-free structure. Coupling is held low for most modules, but every system contains some modules that display high coupling. We consider how a well-structured system could continue to display areas of high coupling, and why it must.
Editha Dimalen "Automatic Extraction of Genes, Diseases and Drugs Association from Biomedical Corpus on the Web"
The amount of biological literature online is continually increasing. As a result, creating a highly significant corpus of documents containing data and results from biological experiments is imperative for scientists and researchers. Such a corpus can be manually analysed to determine associations between genes, diseases, and drugs, associations that are crucial to further advances in the understanding and treatment of disease. Unfortunately, the corpus is too big and too complex for human experts to analyse, so automated exploitation is essential. However, the development of such a system is difficult because documents in the corpus are written in natural language. The proposed study will focus on the development of a system that automatically determines the association of genes, diseases, and drugs using Natural Language Processing (NLP), Information Retrieval and Machine Learning via a boosting algorithm. The proposed study will also consider not only a specific web domain such as Medline (PubMed) but the World Wide Web as the source corpus.
Ehsan Yazdi "Adaptive Resource Allocation for Mobile Body Sensor Networks"
One of the main problems affecting reliable transmission of wireless devices, such as Wireless Sensor Network (WSN) nodes, is interference caused by sharing the unlicensed 2.4 GHz ISM band with other protocols such as IEEE 802.11 and IEEE 802.15.1. This work examines the impact of realistic urban IEEE 802.11 and IEEE 802.15.1 RF interference on packet delivery performance in IEEE 802.15.4 Mobile Body Sensor Networks (MBSNs), and proposes an adaptive resource allocation extension to the IEEE 802.15.4 standard which minimizes the energy consumption of the network by dynamically changing the communication channel and/or the transmit power of the network nodes, while trying to maintain a certain channel quality.
Fahim Abbasi "Information Detection as a means for Intrusion Detection in Honeynets"
This paper evaluates the effectiveness of information-based intrusion detection techniques as implemented in our intrusion detection system (IDS). A relationship between information theory, compression-based information distance, and piecewise hashing is studied and applied to network intrusion detection. A preliminary evaluation is conducted by detecting worms and synthesized worm-like packets with the proposed system, in comparison with misuse detection systems such as Snort.
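The information-distance ingredient can be illustrated with the normalised compression distance, a practical stand-in for the uncomputable normalised information distance (a sketch using zlib; the actual detector and its piecewise-hashing component are not shown, and the alert rule is hypothetical):

    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        # Normalised compression distance: near 0 for highly similar
        # payloads, near 1 for unrelated ones.
        cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Flag a packet whose payload is close to a stored worm sample:
    # if ncd(payload, worm_sample) < threshold: raise_alert()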
Graham Jenson "Study on the Evolution of Component Based Systems using Component Dependency Resolution"
A component is an encapsulated unit of code that explicitly declares the dependencies on its environment that it requires to be functional. If a user requires a new component to be installed, the component's dependencies must be resolved in order to use that component. The task of resolving dependencies can be difficult and complex, and is therefore automated as functionality called Component Dependency Resolution (CDR). CDR is one of the significant benefits gained through the use of software components; it is used by systems such as Apache Maven, Eclipse and Linux distributions. CDR has now become the main mechanism for evolving component-based systems by adding, removing and updating components. A property of CDR is that many different systems can exist that satisfy a set of component dependencies; therefore CDR should identify not just any solution, but the optimal one.
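The transitive core of CDR is simple to sketch (real resolvers such as Maven or apt must additionally handle versions, conflicts and the optimality criterion discussed above; the component names here are illustrative):

    def resolve(component, deps, installed=None, seen=None):
        # Return an install order satisfying the component's transitive
        # dependencies. `deps` maps a component to the components it requires.
        installed = [] if installed is None else installed
        seen = set() if seen is None else seen
        if component in seen:
            raise ValueError("dependency cycle at " + component)
        if component not in installed:
            seen.add(component)
            for d in deps.get(component, []):
                resolve(d, deps, installed, seen)
            seen.discard(component)
            installed.append(component)
        return installed

    print(resolve("app", {"app": ["web", "db"], "web": ["core"], "db": ["core"]}))
    # ['core', 'web', 'db', 'app']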
Jan Larres "Performance Variance Evaluation on Mozilla Firefox"
Noise in performance evaluations of applications poses a problem for the detection of regressions, since the results become unreliable and can lead to unnecessary work searching for non-existent problems in the source code. This work tries to determine the factors that cause non-determinism and eliminate them as much as possible.
Jonathan Rubin "On Combining Decisions from Multiple Expert Imitators for Performance"
One approach for artificially intelligent agents wishing to maximise some performance metric in a given domain is to process a collection of training data that consists of actions or decisions made by some expert, in an attempt to imitate that expert's style. We refer to this type of agent as an expert imitator. In this paper we investigate whether performance can be improved by combining decisions from multiple expert imitators. In particular, we investigate two existing approaches for combining decisions. The first approach combines decisions by employing ensemble voting between multiple expert imitators. The second approach dynamically selects the best imitator to use at runtime based on its performance in the current environment. We investigate the combination of decisions in the domain of computer poker. Specifically, we create expert imitators in the domains of limit and no-limit Texas Hold'em and determine whether their performance can be improved by combining their decisions using the two approaches listed above.
Juan Rada-Vilela "Evolution of Morphology and Behavior of Virtual Creatures"
This paper presents research on the evolution of morphology and behavior of virtual creatures in a physically realistic virtual world. The morphology of a creature is formed by rigid bodies joined by spherical joints. Each rigid body has a sensor which measures orientation using quaternions, and an effector that exerts moments of force on its center of mass. The behavior is modeled by an Artificial Neural Network which receives the quaternions from all sensors and outputs the moment of force to be exerted by each effector. Its architecture emulates a central nervous system by linking the orientations of the rigid bodies before orchestrating motion. Virtual creatures are evolved using Particle Swarm Optimization, where each particle encodes the morphology and behavior of a single creature. Experiments revealed certain characteristics that improve the performance of these virtual creatures. Furthermore, the evolved creatures outperform those of Miconi and Channon (2005) by up to three times in average speed.
Kiattikul Sooksomsatarn "Secure Content Distribution Using Network Coding"
Network coding is a technique for maximising the use of available bandwidth capacity. We are interested in applying network coding to multimedia content distribution. This is desirable because many popular network applications for content distribution consume large amounts of bandwidth, including international bandwidth; both are scarce in countries such as New Zealand. Existing work has addressed the use of network coding for content distribution, while work on network coding and security does not consider the tradeoff between quality of service and security for multimedia. Therefore, the focus of our work is on developing protocols that address both open problems, and on validating these protocols using a combination of formal and simulation techniques.
Mahsa Mohaghegh "The Impact of Domain for Language Model in the PeEn-SMT: First Large Scale Persian-English SMT System"
This paper details recent work on PeEn-SMT, our Statistical Machine Translation system for translation between the English-Persian language pair. We explain how recent tests using much larger corpora helped us to evaluate problems in parallel corpus alignment and corpus content, and how matching the domains of PeEn-SMT's components affects translation output. The paper focuses specifically on the manipulation of the corpora used in the system, and on how correctly matching the domain of each corpus is more likely to yield better translation results. We then focus on combining corpora and employing different approaches to improve test data, showing details of the experimental setup together with a number of experimental results and comparisons between them. Finally, we outline areas of intended future work, and how we plan to improve the performance of our system to achieve higher metric scores and, ultimately, to provide accurate, reliable language translation.
Michael Rinck "Personalized ubiquitous collaboration: Views on Information Objects"
Digital collaboration as we know it is generally bound to stationary, local devices. However, this limit to digital collaboration is fading with the development of smaller and better portable computing devices. This leads to new and unfamiliar collaboration scenarios, as users are no longer limited by location or distance. We found that current collaboration systems and document definitions no longer suffice for the increased requirements of these new collaboration patterns. This paper introduces our concept of views on information objects, which addresses these requirements.
Michael Walmsley "Automatic Glossing for Second Language Reading"
Research in second language (L2) acquisition suggests extensive reading (ER) as an effective learning strategy. ER involves learners reading large quantities of easy L2 text. Classroom ER programs use graded readers, simple books written specifically for L2 learners. However, the availability of graded readers is limited by the expense of creating them. An alternative approach is to read L2 news articles with computer software providing glosses, i.e., first-language translations of words learners click on while reading. Since manual glossing of text is tedious, this project is exploring approaches for automatic glossing.
Michael Waterman "Reconciling architecture and agility: how much architecture?"
Architecture provides important structure and guidance to software development, but has very low priority in agile methodologies. As a result, software developed using agile methodologies often has insufficient architecture and may suffer from deficiencies or increased development costs. This paper introduces proposed research to determine the optimum level of architecture in agile methodologies. The research uses a Grounded Theory method based on the experiences of industry practitioners. Early results show that the experience of the architects, external constraints, and template architectures all affect the architectural effort required in a development project.
Mofassir Haque "Routing Protocols for Content Centric Future Internet Architecture"
Currently, the Internet is primarily used for content distribution, with annual traffic growth of 50%. The present Internet architecture faces a number of problems, including scalability, security, lack of support for ubiquitous computing, difficult management, inefficient mapping, and poor resource utilization. Content-centric architecture has been suggested to overcome the weaknesses of the current architecture and is the next step in the architectural evolution of the Internet. This architecture is based on the concept of named data, i.e., retrieving content by name instead of by location. It will support a wide range of applications and will reuse the successful features of TCP/IP. A number of challenges, such as location-independent content routing, Quality of Service provisioning, and trust management, need to be addressed. We will be working on the development of routing protocols for content-centric architecture, as proposing an efficient and scalable routing solution is a challenging research problem. We will consider both evolutionary and revolutionary solutions for content-centric routing.
Mohammed Thaher "An Efficient Algorithm for Computing the K Overlapping Maximum Convex Sum Problem"
This research presents a new efficient algorithm for computing the K Maximum Convex Sum Problem (K MCSP). Previous research investigated the K MCSP in the case of disjoint regions; in this paper, an efficient algorithm is derived for the related overlapping case. The approach builds on the simplified (bidirectional) algorithm for the convex shape, finding the first maximum sum, the second maximum sum, and so on up to the Kth maximum sum using dynamic programming. Findings show that the novel K Overlapping Maximum Convex Sum (K OMCS) approach gives precise and optimal results compared with those obtained using the rectangular shape. Additionally, this paper presents a prospective application of the Maximum Convex Sum Problem (MCSP): locating a cancer tumour inside a mammogram image.
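For reference, the rectangular baseline that the convex-shape results are compared against can be computed by the classic reduction of the 2-D maximum sum problem to 1-D Kadane scans (a sketch of that baseline only; the bidirectional convex-shape algorithm itself is not reproduced here):

    def max_rectangle_sum(grid):
        # Maximum sum over all axis-aligned rectangles: fix a pair of rows,
        # collapse the strip into per-column sums, then run Kadane's
        # algorithm over the columns. O(rows^2 * cols).
        rows, cols = len(grid), len(grid[0])
        best = grid[0][0]
        for top in range(rows):
            col_sums = [0] * cols
            for bottom in range(top, rows):
                for c in range(cols):
                    col_sums[c] += grid[bottom][c]
                cur = 0
                for s in col_sums:          # Kadane's scan
                    cur = max(s, cur + s)
                    best = max(best, cur)
        return best

    print(max_rectangle_sum([[1, -2, 3], [-4, 5, 6], [7, -8, 9]]))  # 18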
Muhammad Arfeen "A Framework for Resource Allocation in Cloud Computing Environment"
This paper presents a critical evaluation of current resource allocation strategies and their possible applicability to the Cloud Computing environment, which is expected to gain a prominent profile in the Future Internet. This research focuses on network awareness and consistent optimization of resource allocation strategies, and identifies the issues which need further investigation by the research community. A framework for resource allocation in Cloud Computing, based on tailored active measurements, is also proposed. The main conclusion is that network topology, traffic considerations, and changing optimality criteria, along with dynamic user requirements, will play a dominant role in determining future Internet application architectures and protocols, shaping resource allocation strategies in Cloud Computing, for example.
Muhammad Naeem "A Comparative Overview of Stream-based Joins"
Online stream processing is an emerging research area in the field of Computer Science. Common examples where online stream processing is important include network traffic monitoring, web log analysis and real-time data integration. One kind of stream processing relates the information in one data stream to another data stream or to disk-based data; a stream-based join is required to perform such operations. A number of join operators that can process streams in an online fashion have been published in the literature, but every approach has its own pros and cons. In this paper we review a number of well-known join operators by putting them into two categories: in the first category we discuss join operators which take all their inputs in the form of streams, while in the second category we consider join operators in which one input resides on disk. At the end of the paper we present summarized comparisons for each category on the basis of some key parameters. We believe that this effort will contribute to further exploration of this area.
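For the first category, the classic building block is the symmetric hash join, sketched below (a generic illustration, not any one of the reviewed operators; real stream joins additionally bound memory with windows or eviction policies):

    from collections import defaultdict

    def symmetric_hash_join(events, key_r, key_s):
        # `events` yields ('R', tuple) or ('S', tuple) in arrival order.
        # Each tuple is stored in its side's hash table and immediately
        # probed against the other side, so matches are emitted online.
        table_r, table_s = defaultdict(list), defaultdict(list)
        for side, tup in events:
            if side == 'R':
                table_r[key_r(tup)].append(tup)
                for match in table_s[key_r(tup)]:
                    yield (tup, match)
            else:
                table_s[key_s(tup)].append(tup)
                for match in table_r[key_s(tup)]:
                    yield (match, tup)

    events = [('R', (1, 'a')), ('S', (1, 'x')), ('R', (1, 'b'))]
    print(list(symmetric_hash_join(events, key_r=lambda t: t[0],
                                   key_s=lambda t: t[0])))
    # [((1, 'a'), (1, 'x')), ((1, 'b'), (1, 'x'))]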
Mumraiz Kasi "In-Network Semantic Data Integration for Energy Efficient Wireless Sensor Networks"
Over the last few years there has been rapid technological development in Wireless Sensor Networks (WSNs). The use of WSNs has increased dramatically in numerous branches of industry and government, such as habitat monitoring, traffic monitoring and object tracking. However, their resource scarcity remains a challenge. A primary concern is the processing of data at the sensor node level in a heterogeneous network for efficient monitoring. Traditionally in WSNs, the sensor nodes are used only for capturing data, which is analyzed later in the more powerful gateways. This continuous communication of data, among the sensor nodes and with the gateway nodes, wastes energy at the sensor nodes, and the overall network lifetime is greatly reduced. In this paper, we introduce an approach in which sensor nodes are able to perform data processing at the node level. This has several advantages, the most important being extended network lifetime.
Pacharawit Topark-Ngarm "Mobile Client-Honeypot"
Today the smartphone has become more powerful than ever before and accounts for a significant proportion of the mobile phone market, with 15% of all phones sold being classed as smartphones. We use smartphones much as we use laptop computers: to browse the Internet, send and receive email, transfer files, watch, create and transmit multimedia, and install new software. The smartphone malware threat has continuously increased in the last few years since the first mobile malware (Cabir) was found on the Symbian mobile phone in 2004. Our target is to develop a mobile honeypot that will search for mobile malware and study the characteristics of these malware threats. We will extend Capture-HPC, developed by Christian Seifert et al., to work within the constraints of a smartphone platform. This technique provides a cost-effective solution that works around the smartphone's constraints on processing power and power consumption. As a result of this study, we aim to better understand the characteristics of current mobile malware, as well as to create instruments and analysis tools to better detect malware on smartphones.
Paul Mccarthy "fMRI preprocessing considerations for graph theory analysis"
This paper discusses the application of graph theory analysis to fMRI brain imaging data. In particular, the effects of standard fMRI preprocessing techniques upon graphs generated from fMRI volumes are explored. A case study on real data is presented, whereby the effects of preprocessing upon graphs generated from the data are analysed.
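The graph construction itself is straightforward, which is why the preprocessing applied beforehand dominates the outcome (a minimal sketch with an arbitrary correlation threshold; the parcellation into regions and the preprocessing pipeline are where the paper's questions live):

    import numpy as np

    def correlation_graph(timeseries, threshold=0.5):
        # `timeseries` is a (regions x timepoints) array; an edge links
        # two regions whose time courses correlate above `threshold`.
        corr = np.corrcoef(timeseries)
        adj = np.abs(corr) > threshold
        np.fill_diagonal(adj, False)      # no self-loops
        return adj

    rng = np.random.default_rng(0)
    adj = correlation_graph(rng.standard_normal((10, 200)))
    print(adj.sum() // 2, "edges")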
Paul Schmieder "Intelligent Sketch Editing"
Advancements in technology now enable us to sketch on digital canvases on computers. One substantial advantage digital sketching environments have over their non-digital counterparts is editing features such as beautification and recognition. However, little attention has been paid to using those features and extending them to solve the fundamental differences between physical and digital environments such as the limited physical display space for digital sketching. We propose to extend advanced sketching features to ameliorate the problem of limited display space.
Quan Sun "Getting Even More Out of Ensemble Selection"
Ensemble Selection is an ensemble creation method which has received a lot of attention, not only because its implementation is fairly straightforward, but also for its excellent predictive performance on practical problems. Moreover, the method has been highlighted in winning solutions of many data mining competitions, such as the KDD Cup and the Netflix contest. In this paper we present two methods that were used in our winning solution for the UCSD 2010 data mining contest, and empirical results showing that they can also be used to improve Ensemble Selection's performance in general. The proposed methods were implemented using the WEKA machine learning package and evaluated on a variety of benchmark data sets. The results indicate that, with an appropriate application of each method, Ensemble Selection's predictive performance can be further improved and its ensemble building cost can be significantly reduced.
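For context, the base algorithm being improved works roughly as follows (a sketch in the style of Caruana et al.'s greedy forward selection with replacement; the sklearn-style `predict_proba` interface and the early-stopping rule are illustrative, and the paper's two new methods are not shown):

    def ensemble_selection(models, metric, val_X, val_y, rounds=50):
        # Start from an empty bag and repeatedly add (with replacement)
        # the model whose inclusion most improves the validation metric
        # of the bag's averaged probability predictions.
        preds = [m.predict_proba(val_X) for m in models]
        bag, bag_sum, best_score = [], None, None
        for _ in range(rounds):
            scored = []
            for i, p in enumerate(preds):
                cand = p if bag_sum is None else bag_sum + p
                scored.append((metric(val_y, cand / (len(bag) + 1)), i))
            score, i = max(scored)
            if best_score is not None and score <= best_score:
                break                      # no further improvement
            best_score = score
            bag.append(i)
            bag_sum = preds[i] if bag_sum is None else bag_sum + preds[i]
        return bag, best_score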
Samuel Sarjant "CERRLA: Cross-Entropy Relational Reinforcement Learning Agent"
This paper presents a learning agent dubbed Cerrla (Cross-Entropy Relational Reinforcement Learning Agent) which utilises reinforcement learning through policy-search methods within relational environments. Given a relational environment in which the agent receives observations, takes actions and obtains rewards, the agent creates and optimises a set of concise, human-readable relational rules. The rule creation process is guided by the agent's own observations of the environment, minimising the chance of creating invalid rules. Our testbed environments are video games, which often deal with objects and have a clear reward function. Results indicate the algorithm learns high-achieving, concise policies for both simple and difficult environments.
Simon Ware "Generalised Controllability"
Model checking is the task of searching the state spaces of finite-state automata to see whether they satisfy certain properties of interest. One property that is often checked is controllability. This paper proposes a generalisation of the original controllability problem to make it more useful. In addition, we show that the efficient algorithms already developed for verifying controllability can also be applied to the problem of generalised controllability.
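For reference, in the supervisory control setting that controllability comes from, a specification language K is controllable with respect to a plant G with uncontrollable events \Sigma_u exactly when

    \overline{K}\,\Sigma_u \cap L(G) \subseteq \overline{K}

i.e., no uncontrollable event enabled by the plant can lead out of the prefix-closure of the specification; the paper proposes a generalisation of this condition.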
Singh Jaspaljeet "Towards A Ubiquitous Patient-Centric Telehealth System"
The elderly population is growing fast in most developed countries. The aging population, high medical expenses, and the shrinking number of health workers and elderly carers all demand more healthcare innovation that empowers health consumers to manage their health independently from home. Telehealth is thought to be a solution for effective elderly healthcare. However, the usage of telehealth is limited by designs often centered around the requirements of the clinical user, the healthcare provider, and the equipment vendor. Many existing systems suffer from high initial costs, cannot be extended by third parties, require extra costs to add new functionality, and are designed to create a continuing revenue source for the vendor. Furthermore, the systems are usually designed to manage diseases rather than prevent them, and do not address the social and psychological needs of the patient. Based on these shortcomings, we propose a novel web-based telehealth framework called Healthcare4Life, which is ubiquitous, extendable by third parties, incorporates social aspects, and puts the user in control. In contrast to previous work, we propose an open structure with middleware-like functionality. The framework emphasises the need for social support and the psychological factors influencing usage and compliance. In this paper, we describe telehealth usability requirements and an analysis of existing consumer health informatics applications, which leads to the design of a novel ubiquitous telehealth system.
Siva Dorairaj "Distributed Agile Software Development: Towards a Grounded Theory"
Agile methods are being used in globally distributed software development. Agile methods prefer team members to be collocated to allow face-to-face communication. In globally distributed software development, however, the team members are scattered across different geographic locations, and often across several time zones. We are exploring globally distributed Agile software development from the perspective of Agile practitioners. In this paper we outline the proposed research on distributed Agile software development. We aim to discover a grounded theory that explicates the critical success factors for distributed Agile software development, and the practical strategies adopted by distributed Agile teams to overcome the challenges faced by team members.
Sohaib Ahmed "Ontology-based Mobile Learning Content Adaptability"
The use of mobile devices is increasingly prevalent in many areas, particularly in learning. These devices offer the convenience of supporting access to web-based content according to the needs and changing contexts of learners. Learning is becoming more project-oriented and nomadic, in the sense that learners spend their daily life as “nomads” in transit between many physical places (“oases”) such as classrooms, labs, museums and home. Therefore, the concept of nomadic learning has been evolving in order to provide learning content appropriate to different learning environments. Moreover, portable devices have distinct capabilities compared with desktop computers, as they have different affordances. These differences increase the difficulty of presenting content in a mobile learning environment. Therefore, we require quick and robust approaches for developing adaptive learning environments which provide an immediate response to learners switching between different contexts. In order to construct such environments, it is necessary to comprehend the requirements for providing adaptive content within different learning contexts. In this paper we design thin- and smart-client architectures to show how an ontology-based approach can help us to reuse the same web-based learning content across both desktop and mobile learning environments.
Sonny Datt "Towards Practical P Systems: Discovery Algorithms"
In this paper we offer an introduction to P Systems, a relatively new computational model inspired by biology. Our paper covers four of the more popular state-capable systems and provides a brief explanation of each, including at least one problem that each specific P System solves faster than a standard Turing machine (some NP-Complete problems are solved in polynomial P Steps). After determining common elements across the systems we go on to describe discovery algorithms: a high level approach to solving topology problems within P Systems. Our description includes a mapping from our generic or high level system to the low level rule sets for each of the P Systems covered in this paper, as well as some documentation on the usage of our approach with an example.
Sook-Ling Chua "Unsupervised Learning of Patterns in Data Streams using Compression and Edit Distance"
Many unsupervised learning methods for recognising patterns in data streams are based on fixed-length data sequences, which makes them unsuitable for applications where the data sequences are of variable length, such as speech recognition, behaviour recognition and text classification. In order to use these methods on variable-length data sequences, a pre-processing step is required to manually segment the data and select the appropriate features, which is often not practical in real-world applications. In this paper we suggest an unsupervised learning method that handles variable-length data sequences by identifying structure in the data stream using text compression and the edit distance between ‘words’. We demonstrate that using this method, we can automatically cluster unlabelled data in a data stream and perform segmentation. We evaluate the effectiveness of our proposed method using both fixed-length and variable-length benchmark datasets, comparing it to the Self-Organising Map in the first case; the results are promising compared with baseline recognition systems.
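The distance used between ‘words’ is the standard Levenshtein edit distance, computable by dynamic programming (a textbook sketch; the compression step that extracts the ‘words’ from the stream is not shown):

    def edit_distance(a, b):
        # Minimum number of insertions, deletions and substitutions
        # turning sequence a into sequence b (two-row DP).
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,               # deletion
                               cur[j - 1] + 1,            # insertion
                               prev[j - 1] + (ca != cb))) # substitution
            prev = cur
        return prev[-1]

    print(edit_distance("kitten", "sitting"))  # 3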
Su Nguyen "An Application of Genetic Programming for Heuristic Selection"
Genetic programming based hyper-heuristics (GPHH) have become more and more popular in the last few years. Most GPHH methods proposed heretofore have focused on heuristic generation. This study investigates a new application of genetic programming (GP) in the field of hyper-heuristics and proposes a method called GPAM, which employs GP to evolve adaptive mechanisms (AM) to solve hard optimisation problems. The advantage of this method over other heuristic selection methods is the ability of the evolved adaptive mechanisms to contain complicated combinations of heuristics and to utilise problem-solving states for heuristic selection. The method is tested on three problem domains, and the results show that GPAM is very competitive when compared with state-of-the-art hyper-heuristics. An analysis is also provided to gain more understanding of the proposed method.
Syahaneim Marzukhi "Accuracy-Based Learning Classifier Systems For Solving Complex Multiplexer Problems"
Learning Classifier Systems (LCSs) are, in general, rule-based systems (condition-action rules) which apply reinforcement learning (RL) techniques to the decision-making process and use a Genetic Algorithm (GA) for rule discovery. This paper gives an overview of LCSs, from the basic background to the role of the GA and RL in the LCS components. It also compares traditional (strength-based) LCSs with accuracy-based LCSs, which differ in several aspects. The paper verifies past work on the accuracy-based LCS XCS by Wilson, which has become the most studied LCS in recent years, by investigating how it can solve complex multiplexer problems using Roulette Wheel and Tournament Selection. Results show that XCS with Tournament Selection is competitive with XCS with Roulette Wheel Selection in solving complex multiplexer problems.
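The two selection schemes being compared can be sketched as follows (generic illustrations; in XCS proper, selection runs within the current action set, and tournament size is typically a fixed proportion of it, parameterised here by tau):

    import random

    def roulette_wheel(population, fitness):
        # Select one classifier with probability proportional to fitness.
        total = sum(fitness(c) for c in population)
        r = random.uniform(0, total)
        acc = 0.0
        for c in population:
            acc += fitness(c)
            if acc >= r:
                return c
        return population[-1]

    def tournament(population, fitness, tau=0.4):
        # Select the fittest of a random subset of the population.
        size = max(1, int(tau * len(population)))
        return max(random.sample(population, size), key=fitness)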
Tahir Mehmood "An Authentication Mechanism for Ultra-lightweight RFID Tag"
RFID is an evolving technology for identifying objects using radio waves. It is widely used in supply chain management, automatic payment, parking garage management and inventory control systems. As its use grows, owing to its low cost, low power and light weight, more problems related to privacy and security are emerging. Cryptographic measures could prevent these problems, but they cannot be applied because of the resulting rise in cost. This paper presents an authentication model for low-cost RFID tags, preventing the copying of the tag ID while keeping the tag cost low.
Waqar Khan "Accuracy of a Stereo based Collision Avoidance System"
In a stereo configuration, the measurable disparity values are integral, and therefore the measurable depths are discrete. This can create a trap for a safety system whose purpose is to estimate the trajectory of a moving object and issue an early warning to the driver. The accuracy of this estimation is determined by the samples that have different measurable depths. The change between measurable depths becomes significant in closer regions; however, due to the limited extent of the stereo common field of view in these regions, the object might not lie within it. A velocity estimation model has been designed which takes into account the constraints of the stereo and braking systems while determining an accurate estimate of the object's trajectory, in order to warn the driver of a colliding object in time.
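The discreteness follows directly from the standard stereo geometry: with focal length f, baseline B and integer disparity d, the measurable depth is

    Z = \frac{fB}{d}

so the gap between the depths measurable at disparities d and d+1 is

    \Delta Z = \frac{fB}{d} - \frac{fB}{d+1} = \frac{fB}{d(d+1)} \approx \frac{Z^2}{fB}

which grows quadratically with distance; this is the quantisation the velocity estimation model must account for.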
Yasir Javed "Using Ontology to Enhance Context-based Team Situation Awareness"
In this paper, we propose the use of semantic technologies to enhance context-based shared situation awareness. We have identified various problems involving situation awareness in emergency decision making and the importance of context for sharing this awareness. We demonstrate how a domain ontology can be used to represent context-based situations, and how axioms and rules can improve shared situation awareness.
Yuliya Bozhko "Concept map-based framework for learner-centered knowledge management in ePortfolios"
This paper presents a proposed framework for ePortfolio knowledge management, developed in a project that aims to provide universities with an environment for lifelong learning support by integrating and extending Learning Management Systems (LMS) and ePortfolio systems. The problems with using these systems to support lifelong learning were identified through in-depth interviews with lecturers and students at an earlier stage of the project. Specifically, this paper focuses on students' and lecturers' requirements for creating and organizing ePortfolios and for helping learners develop an understanding of graduate attributes. To this end, we draw a parallel between the theory-building steps of qualitative research and the process students go through in understanding concepts while analyzing their learning. We show how the idea of organizing these concepts into maps can be adopted in the ePortfolio framework to facilitate knowledge management.