Wednesday 9th June
Session 1: 16:00 - 18:00
16:00 - 16:20: ‘The Cloud Adoption Toolkit: Addressing the Challenges of Cloud Adoption in the Enterprise.’ Ali Khajeh-Hosseini, David Greenwood, James Smith and Ian Sommerville.
Abstract: Cloud computing promises a radical shift in the provisioning of computing resources within the enterprise. This paper: i) describes the challenges that decision-makers face when attempting to determine the feasibility of cloud adoption in their organisations; ii) illustrates the lack of existing work addressing the feasibility challenges of cloud adoption in the enterprise; iii) introduces the Cloud Adoption Toolkit, a framework that supports decision-makers in identifying their concerns and matching those concerns to appropriate tools and techniques for addressing them. The paper adopts a position-paper methodology: case study evidence is provided, where available, to support its claims. We conclude that the Cloud Adoption Toolkit, whilst still under development, shows signs of being a useful tool for decision-makers, as it helps address the feasibility challenges of cloud adoption in the enterprise.
16:30 - 17:00: ‘A decision-theoretic approach to resourcing plans in a policy-constrained environment.’ Chukwuemeka David Emele, Timothy J. Norman and Frank Guerin.
Abstract: In multi-agent societies, agents often inherit the social characteristics of the individuals or organisations they represent. These social characteristics (e.g. policies and norms) often inform the behaviour of the agents. We present an efficient combination of argumentation, machine learning and decision theory for identifying, learning and modelling the policies of others, using argumentation-derived evidence, and for choosing whom to talk to in a collaborative setting. In a set of experiments, we demonstrate that more accurate models of others' policies can lead to a statistically significant increase in performance (utilities), improved argumentation strategies and reduced communication overhead.
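To make the decision-theoretic element concrete, here is a minimal sketch (not the authors' implementation) of choosing whom to ask once policy models have been learned: each candidate's model is reduced to an estimated probability that its policy permits it to provide the resource, and the agent asks the partner with the highest expected utility. All names, probabilities and costs below are hypothetical.

```python
# Illustrative sketch only: decision-theoretic partner selection given
# learned policy models. Numbers and agent names are hypothetical.

def expected_utility(p_provide: float, value: float, ask_cost: float) -> float:
    """Expected utility of asking a partner: the chance its policy permits it
    to provide the resource, times the resource's value, minus the cost of asking."""
    return p_provide * value - ask_cost

def choose_partner(models: dict[str, float], value: float, ask_cost: float) -> str:
    """Pick the partner whose learned policy model maximises expected utility."""
    return max(models, key=lambda a: expected_utility(models[a], value, ask_cost))

# Policy models learned from argumentation-derived evidence, boiled down to
# P(partner's policy permits providing the resource):
models = {"agent_a": 0.9, "agent_b": 0.4, "agent_c": 0.7}
print(choose_partner(models, value=10.0, ask_cost=1.0))  # -> agent_a
```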
17:00 - 17:30: ‘Portfolios of Constraint Models.’ Ozgur Akgun, Ian Miguel and Chris Jefferson.
Abstract: Constraint Programming (CP) is one of the most powerful model-based decision support systems, especially for problems of a combinatorial nature. There are typically many ways to model a given problem, and the model chosen has a substantial effect on solving efficiency; it is difficult to know in advance which model is best.
To overcome this problem we take a portfolio approach: given a high-level specification of a combinatorial problem, we employ non-deterministic rewrite techniques to obtain a portfolio of constraint models. The specification language (Essence) does not require humans to make modelling decisions, and therefore helps us remove the modelling bottleneck.
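As an illustration of the portfolio idea only (Essence's actual rewrite system is far richer), the sketch below treats each modelling decision as a set of alternative rewrites and enumerates one concrete model per combination of choices. The decisions shown are hypothetical.

```python
# A minimal sketch of the portfolio idea, not Essence's rewrite system:
# each modelling decision offers alternative rewrites, and taking every
# combination of choices yields a portfolio of concrete models.
from itertools import product

# Hypothetical modelling decisions for a toy set-partitioning specification.
DECISIONS = {
    "set_repr": ["occurrence_vector", "explicit_list"],
    "symmetry": ["no_breaking", "lex_ordering"],
}

def portfolio(decisions):
    """Enumerate one concrete model per combination of rewrite choices."""
    names, options = zip(*decisions.items())
    for choice in product(*options):
        yield dict(zip(names, choice))

for model in portfolio(DECISIONS):
    print(model)  # four alternative constraint models for one specification
```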
17:30 - 18:00: ‘What Are You Like? Forming Trust through Stereotypes in Multi-Agent Systems.’ Chris Burnett, Timothy J. Norman and Katia Sycara.
Abstract: Trust and reputation are crucial concepts in open, dynamic multi-agent systems, as agents must rely on their peers to perform as expected, and learn to avoid untrustworthy partners. In such systems, agents may form short-term ad-hoc groups, such as coalitions, in order to meet their goals. However, ad-hoc groups introduce issues which impede the formation of trust relationships. This paper describes a new approach, inspired by theories of human organisational behaviour, whereby agents generalise their experiences with known partners as stereotypes and apply these when evaluating new and unknown partners. We show how this approach can complement existing state-of-the-art trust models, and enhance the confidence in the evaluations that can be made about trustees when direct and reputational information is lacking or limited. Finally, we outline our future work towards generalising the described approach to address the issue of trusted coalition formation.
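A minimal sketch of the stereotype idea, not the paper's model: outcomes of interactions with known partners are generalised over observable features, and the resulting averages serve as a trust prior for strangers who exhibit the same features. The feature labels are hypothetical.

```python
# Illustrative stereotype-based trust bootstrapping: generalise experience
# over observable features, then use it as a prior for unknown partners.
from collections import defaultdict

class StereotypeModel:
    def __init__(self):
        self._sums = defaultdict(float)
        self._counts = defaultdict(int)

    def observe(self, features: frozenset, outcome: float) -> None:
        """Record an interaction outcome (1.0 success, 0.0 failure)."""
        for f in features:
            self._sums[f] += outcome
            self._counts[f] += 1

    def prior(self, features: frozenset, default: float = 0.5) -> float:
        """Stereotype trust prior for an unknown partner: the average outcome
        across the observable features it exhibits."""
        rates = [self._sums[f] / self._counts[f] for f in features if self._counts[f]]
        return sum(rates) / len(rates) if rates else default

m = StereotypeModel()
m.observe(frozenset({"org:acme", "role:courier"}), 1.0)
m.observe(frozenset({"org:acme", "role:courier"}), 1.0)
m.observe(frozenset({"org:globex", "role:courier"}), 0.0)
print(m.prior(frozenset({"org:acme", "role:scout"})))  # falls back on org:acme evidence
```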
Thursday 10th June
Session 2: 12:00 - 13:15
12:00 - 12:25: ‘Two Novel Ant Colony Optimization Approaches for Bayesian Network Structure Learning.’ Yanghui Wu, John McCall and David Corne.
Abstract: Learning Bayesian networks from data is an NP-hard problem with important practical applications. Difficult challenges remain in reducing the computational complexity of structure learning in networks of medium to large size, and in understanding problem-dependent aspects of performance. In this paper, we present two novel algorithms (ChainACO and K2ACO) that use Ant Colony Optimization (ACO). Both algorithms search through the space of orderings of the data variables. The ChainACO approach uses chain structures to reduce the computational complexity of evaluation, but at the expense of ignoring the richer structures explored in the K2ACO approach. The novel algorithms presented here are ACO versions of previously published GA approaches. We present a series of experiments on three well-known benchmark problems. Our results show that the ACO-based approaches may be favoured for larger problems, achieving better fitness and success rates than their GA counterparts on the largest network studied in our experiments.
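The shared skeleton of both algorithms, pheromone-guided construction of variable orderings, might look roughly like the sketch below; the scoring function is a stand-in for the chain/K2 evaluations, and all parameters are hypothetical.

```python
# Schematic ACO over variable orderings (the search space both algorithms
# share). The toy objective stands in for evaluating the network built from
# an ordering; parameters are illustrative, not from the paper.
import random

N_VARS, N_ANTS, N_ITERS, EVAP, Q = 5, 8, 20, 0.1, 1.0
pheromone = [[1.0] * N_VARS for _ in range(N_VARS)]  # pheromone[position][variable]

def score(ordering):
    """Stand-in for evaluating a network built from this ordering
    (a chain structure, or the best K2 network for the ordering)."""
    return sum(1.0 for i, v in enumerate(ordering) if v == i)  # toy objective

def construct():
    """One ant builds an ordering, choosing variables in proportion to pheromone."""
    remaining, ordering = list(range(N_VARS)), []
    for pos in range(N_VARS):
        weights = [pheromone[pos][v] for v in remaining]
        v = random.choices(remaining, weights=weights)[0]
        remaining.remove(v)
        ordering.append(v)
    return ordering

best, best_score = None, float("-inf")
for _ in range(N_ITERS):
    for ordering in (construct() for _ in range(N_ANTS)):
        s = score(ordering)
        if s > best_score:
            best, best_score = ordering, s
    for pos in range(N_VARS):  # evaporate, then reinforce the best ordering found
        for v in range(N_VARS):
            pheromone[pos][v] *= 1 - EVAP
        pheromone[pos][best[pos]] += Q * best_score

print(best, best_score)
```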
12:25 - 12:50: ‘Using machine learning to make constraint solver implementation decisions.’ Lars Kotthoff, Ian Gent and Ian Miguel.
Abstract: Programs that solve so-called constraint problems are complex pieces of software which require many design decisions to be made more or less arbitrarily by the implementer. These decisions significantly affect the performance of the finished solver. Once a design decision has been made, it cannot easily be reversed, although a different decision may be more appropriate for a particular problem.
We investigate using machine learning to make these decisions automatically, based on the problem to be solved, taking the alldifferent constraint as an example. Our system is capable of making non-trivial, multi-level decisions that improve on always making a default choice.
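As a toy illustration of the approach (not the authors' system), one could train a classifier on per-instance features to predict which alldifferent implementation to use; the features, labels and classifier choice below are assumptions.

```python
# Minimal sketch: learn, from instance features, which alldifferent
# implementation variant to use. Features and labels are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Per-instance features: [num_variables, avg_domain_size, constraint_arity]
X = [[10, 5, 4], [200, 50, 100], [30, 3, 10], [500, 200, 250]]
# Best-performing variant observed in training runs, e.g. cheap bounds
# consistency vs. the more expensive GAC propagator.
y = ["bounds_consistency", "gac", "bounds_consistency", "gac"]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[300, 80, 150]]))  # -> likely 'gac' for a large instance
```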
12:50 - 13:15: ‘EventRobot system description.’ Sasa Petrovic, Miles Osborne and Victor Lavrenko.
Abstract: We present a system for the rapid detection of new events from massive textual streams. The system is already deployed and detects new events from a continuous stream of Twitter micro-posts. We show that our system achieves reasonable precision, and how its performance varies with the time of day and the day of the week. Though we detect events from the Twitter stream, our approach is general and applicable to any type of text stream (e.g., blogs, newswire).
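The abstract does not give implementation details, but a common baseline for this kind of new-event detection is to flag a post as a new event when it is sufficiently dissimilar from everything seen so far; the sketch below shows that scheme with a hypothetical threshold (systems at Twitter scale use approximate methods such as locality-sensitive hashing rather than exhaustive comparison).

```python
# First-story-detection baseline: a post is a new event if its nearest
# neighbour among previous posts is below a similarity threshold.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

THRESHOLD = 0.2  # hypothetical: below this similarity to all seen posts, flag a new event
seen: list[Counter] = []

def is_new_event(post: str) -> bool:
    vec = Counter(post.lower().split())
    novel = all(cosine(vec, old) < THRESHOLD for old in seen)
    seen.append(vec)
    return novel

print(is_new_event("earthquake reported near the coast"))      # True: nothing seen yet
print(is_new_event("big earthquake near the coast just now"))  # False: near-duplicate
```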
Session 3: 14:15 - 15:30
14:15 - 14:40: ‘Blur Preserving Image Resizing.’ Tom Kelly.
Abstract: We present a technique to maintain the apparent depth of field in an image as it is resized. An algorithm is given to calculate depth values for an image from focus information. Furthermore, we show how to use this depth information to scale the image while preserving the visual cues that distinguish background from foreground.
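As a rough illustration of the depth-from-focus ingredient only (the paper's algorithm is not reproduced here), local sharpness can serve as a proxy for depth: in-focus foreground regions respond strongly to a Laplacian filter while blurred background regions do not.

```python
# Not the paper's algorithm: a crude depth-from-focus proxy, where local
# Laplacian response marks in-focus (foreground) regions.
import numpy as np

def sharpness_map(gray: np.ndarray) -> np.ndarray:
    """Per-pixel Laplacian magnitude: high where in focus, low where blurred."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return np.abs(lap)

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0          # a sharp-edged (in-focus) square on a flat background
print(sharpness_map(img).max())  # edges of the in-focus region respond strongly
```

In the paper's setting, such a depth map would then guide how each region is scaled so that the apparent depth of field is preserved.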
14:40 - 15:05: ‘The effects of churn on complex search techniques.’ Jamie Furness and Mario Kolberg.
Abstract: The ability to perform complex queries is one of the most important features in many of the P2P networks deployed today. While structured P2P networks can provide very efficient look-up operations via a Distributed Hash Table (DHT) interface, they traditionally do not provide any methods for complex queries. Making use of the structure inherent in DHTs, we can perform complex querying over structured networks by efficiently broadcasting the search query. This allows every node in the network to process the query locally, removing the restrictions placed on the complexity of queries. We refer to this method of searching within a DHT as broadcast search.
Comparing the available broadcast search methods for structured P2P networks through simulation, we see that churn, in particular nodes leaving the network, has a high impact on performance. It is important that algorithms for P2P networks are thoroughly tested under churn, as every deployed network will experience at least some level of it. By studying the strengths and weaknesses of the different approaches, we aim to produce a solution that performs well in less ideal environments with high levels of churn and limited available bandwidth.
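In the spirit of the broadcast methods being compared, here is a schematic of churn-free broadcast over a Chord-like DHT: each node forwards the query to its fingers and hands each finger responsibility for the ring interval up to the next finger, so every node receives the query exactly once. The ring and identifiers are a toy example.

```python
# Schematic Chord-style broadcast over a toy 6-bit identifier ring.
NODES = sorted([1, 9, 14, 21, 28, 38, 42, 51, 60])  # hypothetical node IDs
RING, M = 64, 6

def cw(a, b):
    """Clockwise ring distance from a to b (a full circle when a == b)."""
    return (b - a) % RING or RING

def successor(t):
    """Smallest node ID at or after t, wrapping around the ring."""
    return min((n for n in NODES if n >= t % RING), default=NODES[0])

def fingers(n):
    """Chord-style fingers: successors of n + 2^i, sorted clockwise from n."""
    return sorted({successor(n + 2 ** i) for i in range(M)}, key=lambda f: cw(n, f))

def broadcast(node, limit, reached):
    reached.add(node)  # process the query locally, then delegate sub-intervals
    fs = [f for f in fingers(node) if f != node]
    for i, f in enumerate(fs):
        if cw(node, f) < cw(node, limit):  # finger lies inside our interval?
            nxt = (fs[i + 1]
                   if i + 1 < len(fs) and cw(node, fs[i + 1]) < cw(node, limit)
                   else limit)
            broadcast(f, nxt, reached)  # f covers IDs up to the next finger

reached = set()
broadcast(1, 1, reached)   # the initiator's interval is the whole ring
print(sorted(reached))     # every node receives the query exactly once
```

Churn breaks exactly this delegation: when a node holding responsibility for an interval leaves mid-broadcast, its whole sub-interval silently misses the query, which is why the paper's simulations focus on departures.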
15:05 - 15:30: ‘Lessons from the Failure and Subsequent Success of a Complex Healthcare Sector IT Project.’ David Greenwood, Ali Khajeh-Hosseini and Ian Sommerville.
Abstract: This paper argues that IT failures diagnosed as errors at the technical or project-management level often point to symptoms of failure rather than to its actual source: a project’s underlying socio-complexity (complexity resulting from the interactions of people and groups). We propose a novel method, Stakeholder Impact Analysis, that can be used to identify risks associated with socio-complexity, as it is grounded in insights from the social sciences, psychology and management science. We demonstrate the effectiveness of Stakeholder Impact Analysis using the 1992 London Ambulance Service Computer Aided Dispatch project as a case study, and show that had our method been used to identify the risks, and had those risks been mitigated, the likelihood of project failure would have been reduced. The paper’s original contribution is to expand upon existing accounts of failure by examining them at a finer level of granularity than previous work, enabling the underlying socio-complexity sources of risk to be identified.
Friday 11th June
Session 4: 09:30 - 11:10
09:30 - 10:00: ‘Meaning Representation in Natural Language Categorization.’ Trevor Fountain and Mirella Lapata.
Abstract: A large number of formal models of categorization have been proposed in recent years. Many of these are tested on artificial categories or perceptual stimuli. In this paper we focus on categorization models for natural language concepts and specifically address the question of how these may be represented. Many psychological theories of semantic cognition assume that concepts are defined by human-produced features. Norming studies yield detailed meaning representations; however, they are small-scale (involving only a few hundred words) and of limited use for a general model of natural language categorization. As an alternative, we investigate whether category meanings may be represented quantitatively in terms of simple co-occurrence statistics extracted from large text collections. Experimental comparisons of feature-based categorization models against models based on data-driven representations indicate that the latter represent a viable alternative to feature norms.
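A toy version of the data-driven representation under comparison: each word is represented by its co-occurrence counts within a small context window, and similarity between the resulting vectors stands in for semantic relatedness. The corpus and window size below are hypothetical.

```python
# Toy co-occurrence representation: context-window counts instead of
# human-produced feature norms.
from collections import Counter, defaultdict

CORPUS = ("the dog barked at the cat . the cat chased a mouse . "
          "the car engine roared . the truck engine stalled").split()
WINDOW = 2  # hypothetical context window size

vectors = defaultdict(Counter)
for i, w in enumerate(CORPUS):
    for j in range(max(0, i - WINDOW), min(len(CORPUS), i + WINDOW + 1)):
        if i != j:
            vectors[w][CORPUS[j]] += 1

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

# 'dog' should look more like 'cat' than like 'car' under this representation.
print(cosine(vectors["dog"], vectors["cat"]), cosine(vectors["dog"], vectors["car"]))
```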
10:00 - 10:30: ‘Hanoi: A Practical Typestate Model for Java.’ Iain McGinniss and Simon Gay.
Abstract: Many modern APIs contain interfaces in which methods have strict state-based preconditions, while mainstream programming languages provide no facility to express these constraints in a concise, human-readable, machine-verifiable form. Failure to detect violations of these preconditions can result in partial or catastrophic failure of a system; therefore, programmers must take great care in understanding and verifying the usage of such APIs. We present a simple modelling language, Hanoi, which can be used to express state restrictions in APIs. We have found it useful for the runtime verification of interface usage in Java, and it alleviates the burden on the implementers of such interfaces to defend against incorrect usage.
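Hanoi itself targets Java, and its syntax is not reproduced here; purely as a language-neutral illustration of runtime typestate checking, the sketch below guards each method with the states in which it is legal and the state it transitions to, using the standard open/read/close file protocol as the example.

```python
# Illustrative runtime typestate checking (not Hanoi's notation): each method
# declares its legal source states and its target state.

class TypestateError(RuntimeError):
    pass

def in_state(allowed, then=None):
    """Guard a method: legal only in `allowed` states, then transition to `then`."""
    def deco(method):
        def wrapper(self, *args, **kwargs):
            if self._state not in allowed:
                raise TypestateError(
                    f"{method.__name__} illegal in state {self._state!r}")
            result = method(self, *args, **kwargs)
            if then is not None:
                self._state = then
            return result
        return wrapper
    return deco

class File:
    def __init__(self):
        self._state = "closed"

    @in_state({"closed"}, then="open")
    def open(self): ...

    @in_state({"open"})
    def read(self): return b""

    @in_state({"open"}, then="closed")
    def close(self): ...

f = File()
f.open(); f.read(); f.close()
try:
    f.read()                   # protocol violation: reading a closed file
except TypestateError as e:
    print(e)                   # -> read illegal in state 'closed'
```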
10:30 - 11:00: ‘H2O: An Autonomic, Resource-Aware Distributed Database System.’ Angus Macdonald, Alan Dearle and Graham Kirby.
Abstract: This paper presents the design of an autonomic, resource-aware distributed database which enables data to be backed up and shared without complex manual administration. The database, H2O, is designed to make use of unused resources on workstation machines.
Creating and maintaining highly available, replicated database systems can be difficult for untrained users, and costly for IT departments. H2O reduces the need for manual administration by autonomically replicating data and load-balancing across machines in an enterprise.
Provisioning hardware to run a database system can be unnecessarily costly as most organizations already possess large quantities of idle resources in workstation machines. H2O is designed to utilize this unused capacity by using resource availability information to place data and plan queries over workstation machines that are already being used for other tasks.
This paper discusses the requirements for such a system and presents the design and implementation of H2O.
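As a back-of-the-envelope sketch of resource-aware placement in general (not H2O's actual policy), one might rank workstations by a weighted score of idle capacity and reliability and place replicas on the top-scoring machines; the metrics and weights below are invented.

```python
# Hypothetical resource-aware replica placement: rank workstations by a
# weighted score of idle resources and availability, then pick the best k.
machines = {
    "ws-01": {"cpu_idle": 0.9, "free_disk_gb": 120, "uptime_frac": 0.98},
    "ws-02": {"cpu_idle": 0.3, "free_disk_gb": 400, "uptime_frac": 0.70},
    "ws-03": {"cpu_idle": 0.8, "free_disk_gb": 80,  "uptime_frac": 0.95},
}

def score(m):
    # Favour machines that are idle, have disk space, and tend to stay online.
    return (0.4 * m["cpu_idle"]
            + 0.2 * min(m["free_disk_gb"] / 500, 1.0)
            + 0.4 * m["uptime_frac"])

def place_replicas(machines, k=2):
    return sorted(machines, key=lambda name: score(machines[name]), reverse=True)[:k]

print(place_replicas(machines))  # -> ['ws-01', 'ws-03']
```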