MSc Dissertation

To qualify for the award of an MSc, a student is required to propose, design and undertake a detailed study of a topic relevant to Computer Science in their area of study (ASD or DA) and complete a dissertation. The work must be of a standard appropriate for a taught Master's degree and should include a degree of originality. The sections below outline the practical steps and requirements involved in undertaking the dissertation, from planning your proposal to submitting your final dissertation.

The dissertation coordinator is Dr Lucas Rizzo.

Material should be submitted to lucas{dot}rizzo[at]tudublin(dot)ie


Key Information on the MSc Dissertation process

Aim of dissertation

The aim of the dissertation is to enable students to apply the skills and knowledge gained during previous stages of their study to the analysis of a specific problem or issue relevant to their programme of study (ASD or DA), via a substantial piece of independent work carried out over a 3 month (one semester) period. Further, the aim is for students to demonstrate proficiency in the design of a research project, application of appropriate research methods, collection and analysis of data, and presentation of results.

Dissertation manuscript

The completed dissertation will be a substantial piece of written work in the region of 60-70 pages (approx. 20,000 words) of core chapters (Introduction chapter to Conclusion chapter), with a maximum page limit of 120 pages overall. It is important to note that the length of your dissertation will depend on the topic and material that you are including; remember that quality is far more important than quantity.

Examination and marks

Your dissertation will be examined by a panel of lecturers, chaired by the dissertation coordinator. The marking scheme that will be used can be found below. If you pass the dissertation and correctly submit the final version, you will be awarded a master's degree at the next available graduation ceremony. If you fail, you will be awarded a postgraduate diploma at the next available graduation ceremony.

Deadlines

Note: this table remains consistent across all years, with modifications only to ensure that days do not align with weekends, bank holidays, and similar events.

(*) These dates apply to part-time students only

(**)This dissertation option is only available to full time students who have applied and received approval. Requests for approval are made to the Dissertation Coordinator in writing and need to be supported by appropriate reasons. Progression to project is also subject to availability of a supervisor who is willing to supervise over the Summer.

(***) The exact list and order of presentations will be scheduled roughly 10 days before the presentation days, shortly after the submission of the last batch of theses. Presentations will be during the day, and not in the evenings, as they are official TU Dublin examinations. Planned dates are usually available 6 months in advance, to allow students to plan accordingly. These dates cannot be changed, for a number of reasons including room availability, supervisor, second reader and coordinator duties, as well as constraints dictated by the TU Dublin exam office.

Research topic ideas

The following are suggested research topics provided by lecturers in the school. You are welcome to create your own project idea or use these suggestions as inspiration to define a new topic for your dissertation. If you are interested in any of these topics, you should contact the lecturer who wrote the project idea to find out more or to discuss how you might approach the topic for your proposal. Some topics are high level, and multiple students could take different approaches to working on them; others are specific, and you need to check whether another student is already working on the research problem.

Please note: You cannot choose your supervisor. Supervisors will be allocated centrally following the submission and evaluation of dissertation proposals. If you have worked with a particular lecturer on a project proposal, you can indicate that you would like to work with them as part of your submission, however there is no guarantee that they will be assigned as your supervisor as this may not be possible.

Individual projects

This project will look at the development of an open data resource to support ML-based compliance auditing of countries such as Ireland against the COP26 climate goals. Through literature surveys and engagement with stakeholders, the project will investigate what is required for this task in terms of both data resources and data analytics. From this it will identify data sources, both local and international, to support this work. Appropriate pre-processing and analytic techniques from data science will be trialled through case studies. This project will establish the requirements for hosting this data according to open data standards. These will include how best to manage the data integrity of stakeholders, and the project will propose a legal framework around the sharing of data. It will look at the technical details of storing and sharing data through warehousing, API construction and secure access.

In some high-income countries and many low- to middle-income countries, the ability of healthcare staff to read elbow x-rays of children is poor. There is a standardised process for reading an elbow x-ray, which is related to the child's age and to very specific lines to observe (e.g. the anterior humeral line or the radiocapitellar line). The goal is to evaluate the feasibility of a smartphone application that assists the clinician with the real-time interpretation of an image. Two options are proposed: (1) taking a photograph of the image and uploading it to the application, which then provides the lines and areas of interpretation; (2) holding the smartphone (while in the application) over an image and allowing the application to provide the lines and areas of interpretation, assisting the clinician in reaching the correct diagnosis. The method of producing an x-ray in low- and middle-income countries may be electronic or acetate (figure 1), and is electronic (figure 2) in high-income countries.

This project aims to develop a deep learning model for land use classification using satellite imagery and GIS data. The first step is to identify suitable data sources, for example Sentinel-2 imagery, and to find corresponding GIS information, including land use labels. The methodology involves training a convolutional neural network (CNN) to extract spatial features from the imagery and integrating GIS data as input features. The model should be capable of classifying land use categories such as urban, agricultural, and forested areas. This project has practical applications in urban planning, environmental monitoring, and natural resource management.
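
As a rough illustration of the kind of model involved (not part of the project brief), a minimal Python/Keras sketch is given below; the patch size, band count and class count are assumptions, and GIS-derived features could be concatenated before the final dense layers:

import tensorflow as tf

NUM_CLASSES = 4   # e.g. urban, agricultural, forest, water (assumed)
PATCH_SIZE = 64   # pixels per side of each image patch (assumed)
BANDS = 4         # e.g. R, G, B, NIR bands (assumed)

# Small CNN that classifies satellite image patches into land-use categories.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(PATCH_SIZE, PATCH_SIZE, BANDS)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_patches, train_labels, epochs=10)  # hypothetical data, loading not shown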

This project focuses on predicting traffic flow and congestion patterns in urban areas using deep learning and GIS. Traffic data, such as that provided by TII, could be used to facilitate this project. The methodology involves building a recurrent neural network (RNN) model that integrates spatial information from GIS data, such as road networks and traffic signal locations. The model will predict traffic congestion, helping commuters with route planning and city authorities with traffic management strategies.
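
Purely for illustration, a minimal recurrent model for a single detector site might look like the Python/Keras sketch below; the window length and feature set are assumptions, and the real project would add spatial GIS information as further inputs:

import numpy as np
import tensorflow as tf

WINDOW = 12    # e.g. 12 past 5-minute observations (assumed)
FEATURES = 3   # e.g. flow, speed, occupancy per time step (assumed)

# LSTM that predicts the next traffic-flow value from a short history.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),   # predicted flow at the next time step
])
model.compile(optimizer="adam", loss="mse")

# Dummy arrays only to show the expected shapes; real data would come from traffic counts.
X = np.random.rand(100, WINDOW, FEATURES).astype("float32")
y = np.random.rand(100, 1).astype("float32")
model.fit(X, y, epochs=2, verbose=0)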

This project addresses the enhancement of satellite imagery resolution for improved environmental monitoring. We will leverage a dataset of multi-resolution satellite images along with GIS-based terrain and land cover information. The methodology involves training a deep learning model, such as a Generative Adversarial Network (GAN), to upscale lower-resolution satellite images while preserving critical details. The high-resolution images will enhance the accuracy of applications like land cover analysis, disaster response, and ecological studies.

In this project, we explore the development of a deep learning model for generating natural language descriptions for geospatial images. The required dataset includes geospatial images with associated location and environmental data. The methodology involves using a combination of Convolutional Neural Networks (CNNs) to analyze images and Recurrent Neural Networks (RNNs) for language generation. By integrating GIS data, the model can provide contextually relevant and detailed image captions. This technology benefits navigation systems, accessibility for visually impaired individuals, and location-based content generation.

This project focuses on the early detection of forest fires using remote sensing and deep learning. The required dataset includes satellite imagery and historical wildfire data. The methodology involves training a deep learning model, such as a CNN or other suitable architectures, to detect smoke plumes and other fire-related patterns in satellite images. The system could also provide real-time alerts and wildfire locations, enabling rapid response by firefighting teams. This project contributes to improved forest fire management and environmental conservation efforts.

Machine Learning (ML) has become very popular in recent decades because it allows models to be created from previous information with little human intervention. ML has been widely used for classification and for predicting values. This project, however, focuses on a branch of ML called Reinforcement Learning (RL). In RL there is an agent that moves from one state to another, receiving a positive or negative reward. The agent has to learn the best action for each state by maximising the total reward from the initial to the final state; this is called the optimal policy. RL has made breakthrough achievements in recent years, such as beating professional players at the game of Go, which had remained one of the biggest challenges in Artificial Intelligence. The project is aimed at finding the best possible combinations of actions for reducing the energy bill of an industrial company. To this end, we will simulate the energy consumption of a company and then implement an RL algorithm such as Deep Q-Learning to find out which actions are best at each point to reduce energy consumption as much as possible.
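
To give a feel for the core mechanism, the Python sketch below shows plain tabular Q-learning on a toy scheduling problem; the states, actions and reward function are invented placeholders (a real project would use a simulated consumption profile and a deep variant such as Deep Q-Learning):

import numpy as np

n_states, n_actions = 24, 3           # e.g. hour of day x {run, defer, idle} (assumed)
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    reward = -np.random.rand()        # negative cost of the chosen action (placeholder)
    return (state + 1) % n_states, reward

for episode in range(500):
    state = 0
    for _ in range(n_states):
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # Q-learning update rule
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

policy = Q.argmax(axis=1)             # learned best action for each state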

Whenever a country is hit by a pandemic, governments must take the best actions to guarantee the public health of the country while also reducing the negative impact of the pandemic on the economy. The government can define different phases that range from more to less restrictive, but it is crucial to develop tools that help plan these phases given the priorities of the government. In this work, we develop a model based on Reinforcement Learning (RL) to help governments combat virus pandemics such as COVID-19. To this end, we will implement an SEIR model to represent the spread of the virus through the population. We will compare our RL-based approach with other optimisation approaches such as Genetic Algorithms or Ant Colony Optimisation. We expect to provide new insights on this topic to help countries manage pandemics in a better way.
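
For reference, a minimal SEIR simulation is sketched below in Python with illustrative parameter values only; in the RL setting, a restriction phase chosen by the agent could simply scale the transmission rate, trading infections against an economic penalty in the reward:

N = 1_000_000                          # population size (assumed)
beta, sigma, gamma = 0.3, 1 / 5, 1 / 10  # transmission, incubation and recovery rates (assumed)
S, E, I, R = N - 1, 0, 1, 0
dt, days = 0.1, 300
history = []

# Simple Euler integration of the SEIR ordinary differential equations.
for _ in range(int(days / dt)):
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
    history.append((S, E, I, R))

# A restriction phase selected by the RL agent could reduce beta for its duration.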

The recent increased use of “black-box” systems in Artificial Intelligence has led to an issue of trustworthiness and understandability for end-users operating such systems. The inability to explain the inferences of these systems severely affects their applicability. Theoretically, approaches exist that enable interpretability and understandability of “black-box” systems by humans; however, these are difficult to implement in practice due to the high amount of deductive, declarative knowledge required from experts. This project aims to tackle this problem by taking advantage of the precision and accuracy that can be achieved by machine learning in modelling tasks and the explanatory capacity of defeasible argumentation, an emerging paradigm in AI. This paradigm is a technique for modelling reasoning under uncertainty using the notions of arguments and contradictions among them, routinely employed in human reasoning activities. The proposal is to automatically extract arguments, essentially propositional rules, from “black-box” models learnt with machine learning. In turn, potential contradictions (attacks) among the extracted arguments are to be heuristically identified. This network of arguments and attacks can be validated by one or more end-users, maintaining the interpretability and understandability of the proposed solution. At the same time, quantitative inferences can be produced by this same network of arguments with transparent algorithms from the literature on defeasible argumentation. It is expected that these inferences will present predictive power similar to that of the exploited machine learning models.

Gravitational waves are disturbances in the curvature of spacetime, generated by accelerated masses, that propagate as waves outward from their source at the speed of light. Detecting these types of “rare” events is of extreme importance. This project will focus on the construction of a Deep Learning autoencoder for the automatic identification of gravitational waves. In detail, it will focus on the construction of a visualisation layer of the latent space (learned by the autoencoder) that will help humans understand the salient hidden patterns inherent to gravitational waves.

Argumentation has advanced as a solid theoretical research discipline for inference under uncertainty within Artificial Intelligence. Scholars have predominantly focused on the construction of argument-based models for demonstrating non-monotonic reasoning adopting the notions of arguments and conflicts. However, they have marginally attempted to examine the degree of explainability that this approach can offer to explain inferences to humans in real-world applications.

This proposal concerns the application of argumentation to wrap machine learning models with argumentative capabilities.

Computational Trust applies the human notion of trust to the digital world, which is often seen as malicious rather than cooperative.

Trust factors could be promising in assessing the trustworthiness of virtual identities interacting in an open environment. This proposal concerns the application of computational trust to online communities such as Wikipedia and Stack Overflow, where content is created collaboratively by humans. In detail, the goal is to create a model of trust to evaluate the trustworthiness of information in online communities. This can be done employing machine learning, either unsupervised or supervised.

Past research in HCI has generated a number of procedures for assessing the usability of interactive systems. In these procedures there is a tendency to omit characteristics of the users, aspects of the context and peculiarities of the tasks. Building a cohesive model that incorporates these features is not obvious. A construct greatly invoked in Human Factors is human Mental Workload, whose assessment is fundamental for predicting human performance. This proposal focuses on empirical research investigating which factors mainly compose mental workload and their impact on task performance. A user study was carried out with participants executing a set of information-seeking tasks over three popular websites (a dataset is ready). The goal is to investigate whether supervised machine learning techniques based on different learning strategies can be successfully employed to build models of mental workload aimed at predicting classes of task performance, and to extract the factors that contribute most to this goal.

Deconvolutional networks are convolutional neural networks (CNNs) that work in a reversed process. They strive to find lost features or signals that may previously not have been deemed important to a CNN's task; a signal may be lost because it has been convolved with other signals. The deconvolution of signals can be used in both image synthesis and analysis. This project will aim to build a deconvolutional neural network that works with EEG signals, and to visualise CNN-learnt representations by back-projecting them to the original input space, allowing a deeper interpretation.

Rule extraction (RE) from recurrent neural networks (RNNs) refers to finding models of the underlying RNN, typically in the form of finite state machines, that mimic the network to a satisfactory degree while having the advantage of being more transparent. This project will focus on creating a solution to the above problem with EEG data.

An autoencoder is a neural network trained with an unsupervised learning algorithm that uses backpropagation to generate an output value close to the input value. Different types of autoencoders exist: sparse, stacked, variational. Designing an architecture is itself a challenge, as it includes the design of hidden layers, which can be convolutional, dense or recurrent, as well as the number of neurons and various hyperparameters. This project will focus on this challenge, and in particular on the development of an autoencoder for noise reduction in EEG signals. This will be compared against the well-known Principal Component Analysis (PCA) algorithm.
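
A minimal illustrative sketch (Python/Keras plus scikit-learn, with an assumed window length and bottleneck size) of a denoising autoencoder alongside a PCA baseline might look like this:

import tensorflow as tf
from sklearn.decomposition import PCA

WINDOW = 256   # samples per EEG window (assumed)
CODE = 32      # size of the bottleneck layer (assumed)

# Dense autoencoder mapping a noisy EEG window to a cleaned reconstruction.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(CODE, activation="relu"),      # bottleneck
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(WINDOW, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(noisy_windows, clean_windows, epochs=50)   # hypothetical data, not shown

# PCA baseline: project onto the same number of components and reconstruct.
pca = PCA(n_components=CODE)
# denoised = pca.inverse_transform(pca.fit_transform(noisy_windows))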

The demands of evaluating the usability of interactive systems have produced, in recent decades, various assessment procedures. Often, in the context of web design, when selecting an appropriate procedure it is desirable to take into account the effort and expense required to collect and analyse data.

For this reason the notion of performance is acquiring importance for enhancing web design. However, assessing performance is not a trivial task, and many computational methods and measurement techniques have been proposed. One important construct that is strictly connected to performance is human Mental Workload (MWL), often referred to as Cognitive Workload. This study aims to assess subjective mental workload over web-based tasks and investigate its correlation with objective indicators of tangible activity of online users in the browser (mouse movement, scrolling, clicking, focus – using jQuery and JavaScript).

Mental workload is probably the most invoked concept in Ergonomics. In a nutshell, it can be intuitively defined as the amount of mental work necessary for a person to complete a task over a given period of time. Several MWL assessment procedures have been proposed in the literature; examples include the subjective instruments NASA Task Load Index and the Workload Profile. Similarly, learning can be quantified in several ways. The aim of this project is to adopt tag cloud visualisation techniques to quantify learning before and after teaching sessions and to relate it to mental workload indexes.

Cognitive Load Theory (CLT) has been proposed as a means of designing instructional material and delivering it in a way that maximises learning. Under this perspective, it is evident that CLT becomes extremely important in third-level education. An adequate mental workload is a condition in which there is a balance between the intrinsic difficulty of a task/topic, the way it is presented to the learners (extraneous load) and the amount of effort performed by the learner to integrate the new knowledge with the old (germane load). However, the experience of cognitive workload is not the same in every individual, varying according to cognitive style, education, and upbringing. The aim of this project is to use CLT jointly with computational methods for mental workload representation and assessment to quantify the mental load imposed on learners by teaching activities and instructional material. The assumption is that if learners experience an optimal cognitive load during a class, their learning is optimised. It follows that, if their cognitive load can be quantified after any teaching session, this quantity can be used as a form of empirical evidence to select the most effective teaching method and/or instructional material in a given context and for a given audience.

The project aims to investigate the relationship between the quantified cognitive load of learners and their quantified learning during teaching sessions.

The demands of evaluating the usability of interactive systems have produced, in recent decades, various assessment procedures. Often, in the context of web design, when selecting an appropriate procedure it is desirable to take into account the effort and expense required to collect and analyse data. For this reason, web designers have tended to adopt cheap subjective usability assessment techniques for enhancing their systems. However, there is a tendency to overlook aspects of the context and characteristics of the users during the usability assessment process. For instance, assessing usability in testing environments is different from assessing it in operational environments. Similarly, a skilled person is likely to perceive usability differently from an inexperienced person. For this reason the notion of performance is acquiring importance for enhancing web design. However, assessing performance is not a trivial task, and many computational methods and measurement techniques have been proposed. One important construct that is strictly connected to performance is human Mental Workload (MWL), often referred to as Cognitive Workload. Several MWL assessment procedures have been proposed in the literature, but a measure that can be applied to web design is lacking. Similarly, recent studies have tried to employ the concept of MWL jointly with the notion of usability. However, despite this interest, not much has been done to link these two concepts together and investigate their relationship.

The aim of this research study is to shed light on the correlation of these two concepts and to design a computational model of mental workload assessment that will be tested with user studies and empirically evaluated in the context of web-design.

Argumentation theory (AT) is a new and important multi-disciplinary topic in Artificial Intelligence (AI) that incorporates elements of philosophy, psychology and sociology and that studies how people reason and express their arguments. It systematically investigates how arguments can be built, sustained or discarded in a defeasible reasoning process, and the validity of the conclusions reached through the resolution of potential inconsistencies. Because of its simplicity and modularity compared to other reasoning approaches, AT has been gaining importance for enhancing decision-making. This project aims to study the impact of defeasible reasoning and formal models of argumentation theory for supporting and enhancing decision-making. Multiple fields of application will be tested against state-of-the-art approaches: decision-making in health care, multi-agent systems, trust and the Web.

Argumentation theory (AT) is a new and important multi-disciplinary topic in Artificial Intelligence (AI) that incorporates elements of philosophy, psychology and sociology and that studies how people reason and express their arguments. It systematically investigates how arguments can be built, sustained or discarded in a defeasible reasoning process, and the validity of the conclusions reached through the resolution of potential inconsistencies. Because of its simplicity and modularity compared to other reasoning approaches, AT has been gaining importance for enhancing knowledge representation. This project aims to study the impact of defeasible reasoning and formal models of AT for enhancing the representation of the ill-defined construct of human mental workload (MWL), an important interaction design concept in human-computer interaction (HCI). The argumentation theory approach will be compared against other knowledge representation approaches.

The scientific research in the area of computational mechanisms for trust and reputation in virtual societies is an emerging discipline within Artificial Intelligence. It is aimed at increasing the reliability, trust and performance of electronic communities and online information. Computer science has moved, in recent decades, from the paradigm of isolated machines to the paradigm of networks and distributed computing. Similarly, Artificial Intelligence is quickly shifting from the paradigm of isolated and non-situated intelligence to the paradigm of situated, collective and social intelligence. This new paradigm, as well as the emergence of information society technologies, is responsible for the increasing interest in trust and reputation techniques applied to public online information, communities and social networks. This study is aimed at investigating the nature of trust, the factors that affect trust in online information, and the design of a computational model for assessing trust. This will be evaluated empirically with user studies involving several online websites and people.

This research investigates the behaviour and dynamics of online communities, including models of reputation and trust. It focuses specifically on the investigation of how financial online communities react to market crashes and the predictive power of such communities. This applied research makes use of a multidisciplinary set of techniques such as data- and text-mining techniques, along with econometric approaches, network analysis, agents, user modelling and trust.

Is Wikipedia (positively or negatively) biased against some cultures?

Wikipedia is present in more than 250 languages. It is reasonable to think that each Wikipedia version represents the point of view of a specific culture and country. Even when a common topic is presented across multiple versions (written in different languages), it is reasonable to think that the importance and emphasis given to the topic depend on the cultural background of the writers of the articles. This variable importance could be negligible for some neutral topics, but it could be quite significant for controversial topics or topics strongly representative of particular cultures. For instance, it is likely that an article about Oscar Wilde is better described in the English version than in the Russian one, while a major Russian writer such as Chekhov or Tolstoy could have more relevance in the Russian Wikipedia, likely with a higher importance than Shakespeare. However, Wikipedia guidelines underline that Wikipedia articles should follow a neutral point of view, where every topic is presented in a fair, balanced and objective way.

Therefore, a question arises: are the differences among versions of Wikipedia still compatible with a neutral point of view, or are they a consequence of a bias towards some topics and cultures? The project aims to collect a set of topics common to different Wikipedia versions and quantify the bias that each version has towards its own culture or other cultures. After a group of Wikipedia versions has been selected, a selection of articles will be defined and extracted from a Wikipedia dump. The techniques used will include social network analysis and data analysis; no text analysis is required. The core idea is to design and quantify a level of importance for an article in each Wikipedia version, and then check whether the difference in this level of importance across multiple versions is statistically significant or whether it is tolerable.

In the field of Computing, online communities of practice such as Stack Overflow have great potential for educational purposes. Michael Staton suggested technology firms now consider Stack Overflow to be “the new Computer Science Department where people go to learn”. A recent survey conducted among Irish lecturers and students revealed that more than 75% of lecturers have already used Stack Overflow, 82% admit to having learned something from it, half of them think Stack Overflow could be used in teaching Computing, 30% think Stack Overflow explains concepts better than a university textbook, and 35% of students think that Stack Overflow always or often explains concepts better than their lecturers. However, both students and lecturers complained about the lack of structure and organisation of the Stack Overflow material. If this material is to be used for education (and not only for quick reference), it must be better organised, high-quality content should be filtered, it should be easier to navigate, and the learner should understand and be guided through the prerequisites of each question. Computing is a highly interconnected discipline; for instance, writing a simple PHP script requires an understanding of HTML, DB connections, SQL, and bash scripting. If we are to bear the learner in mind, the most crucial thing is to provide them with a compass to safely navigate the learning material. Moreover, not all the content of Stack Overflow is suitable for learning: some of the Q&A are quick cut-and-paste code fixes, while other pages are complete and sound discussions and presentations of a Computer Science topic that can have great potential for educational purposes. The aim of the project is to provide tools to leverage and better exploit the learning potential of Stack Overflow. The following are a few ideas that the project might explore. First, using text mining and data analytics techniques, the project aims to separate the material more interesting for teaching purposes from the purely technical tips-and-tricks content. Second, using text analytics and visualisation techniques, the project could visualise a network of Computer Science concepts emerging from the content of Stack Overflow, and show the links and prerequisites between them. The network can be used by a learner as a compass to understand the links between concepts, to understand which concepts are the fundamental ones, and to navigate among the questions in a meaningful way.

Finally, given a topic (such as “database normalization”, “file reading in C” or “inheritance in Java”), the project could develop a tool that automatically selects a set of Q&A relevant to the topic and with didactic value, which can be used by a learner for their practice or by a lecturer to design a tutorial/lab session. In this scenario, the question can be posed to the learner with the answer kept hidden, and the answer revealed to the learner only when appropriate.

There is a huge volume of video uploaded to the WWW each day, and the problem of viewing and classifying it is an ongoing one. Sound is very informative about the nature and type of a video: sports videos, with their crowd roars, will have their own distinctive pattern, for example. This project is to investigate and test the ability to classify video using the video's sound. It will use machine learning techniques to develop a classifier. The classifier will use sound features of the video, based on analysing and picking out features in the digital sound signal.
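
One plausible (but not prescribed) starting point is to summarise each clip's audio track with MFCC statistics and feed them to an off-the-shelf classifier; the Python sketch below (using librosa and scikit-learn) shows that feature-extraction step, with file paths and labels as hypothetical placeholders:

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    # Load the audio track of a video clip and summarise it with MFCC mean/std.
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # 26-dim summary

# Hypothetical usage: clip_paths and labels (e.g. "sport", "news") are not defined here.
# X = np.array([clip_features(p) for p in clip_paths])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)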

This project would be to implement a parser for the Z Specification language, using the Haskell programming language. The parser should be compliant with the official Z standard as far as possible.

This project is to investigate and develop a model of swarm intelligence. The basic architecture of a swarm is the simulation of collections of concurrently interacting agents; with this architecture, you can implement a large variety of agent-based models. One interesting application is the modelling of crowd behaviour in emergency situations.

This project will focus particularly on kinematics (measurement of the movement of the body in space), using simple computer vision techniques to identify polynomial splines and create a “stickman” figure which can be overlaid beside the original actor using augmented reality techniques.

Puppy Linux is a distribution that provides users with a simple environment, and can be used to recover files (as long as they aren’t NTFS). This project suggests that you extend the functionality of Puppy Linux to recover NTFS files.

This project will look at MIT research at the intersection of vision and graphics, where researchers created an algorithm that offers its users a new way of looking at the world. The technique, which uses an algorithm that can amplify both movement and colour, can be used to monitor everything from breathing in a sleeping infant to the pulse in a hospital patient. Its creators, led by computer scientist William Freeman, call it “Eulerian Video Magnification”: https://www.youtube.com/watch?v=3rWycBEHn3s

Knowledge bases are key to being able to store and reuse knowledge. Examples are less structured bases such as Wikipedia, or more structured ones such as DBpedia or ConceptNet. This project is about building a visual knowledge base from a labelled image set, using object recognition, and building an associated knowledge base. The concept of image recognition can then be proved by matching unseen images against the knowledge base to see if they are recognised. A knowledge base that does not appear to exist yet is one based on visual images. To explain the concept, let's build up the scenario. For example, if a human were to look at a picture of a street, they would be able to name objects in the image such as car, lamppost, house, and so on. The associated knowledge is that a street “contains” or is associated with cars, houses, etc. The human uses this type of knowledge to recognise any street image. If we then pass many images through this process, useful knowledge can be generated. This knowledge, if captured, could then be used to recognise images in an unsupervised way, based on their content: e.g. an unlabelled street image contains cars, street furniture, etc., so it matches to “street”. In turn, this could support recognition of unseen images, similar to “zero-shot learning”.

Trained machine learning models contain knowledge. They have learned the patterns within the data that link to whatever the output task labels are, for example classification or object detection. Take a machine learning model that predicts fraudulent insurance claims: it trains on instances of insurance claims that are labelled as “fraud/not fraud”, and the underlying algorithm then defines a link between input features/values and output predictions, i.e. the model “knows” what aspects of the input claim are indicative of fraud or not. The knowledge in this example model is valuable: insurance companies would be very interested to confirm, from their data, which particular features are more indicative of fraud. To discover this knowledge, it should be possible to use feature selection, whereby the most important features of the model can be identified. Likewise, it may be possible to use explanations, whereby each decision of the model can be explained against its input features. This paves the way to gleaning reusable knowledge from machine learning models. This project is about researching the state of the art on extracting reusable knowledge from machine learning models of structured (tabular) data. To make the project more complete, the idea would be to also apply, or even expand on, a technique in some form of proof of concept or experimental work; that really depends upon the findings in the state of the art.
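
As a small illustration of one such technique, the Python/scikit-learn sketch below extracts two complementary views of feature importance from a trained tabular model; a synthetic dataset stands in for a real claims table, and the column names are invented:

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an insurance-claims table (a real project would load tabular claims data).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(10)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Impurity-based importances learned during training.
builtin = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
# Permutation importance measured on held-out data.
perm = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
perm_scores = pd.Series(perm.importances_mean, index=X.columns).sort_values(ascending=False)
print(builtin.head(), perm_scores.head(), sep="\n\n")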

The Seven Principles of Universal Design, modified by O'Leary and Gordon (2011), are:

1. Equitable Use: The design is useful and marketable to people with diverse abilities.
2. Flexibility in Use: The design accommodates a wide range of individual preferences and abilities.
3. Simple and Intuitive Use: Use of the design is easy to understand, regardless of the user's experience, knowledge, language skills, or current concentration level.
4. Perceptible Information: The design communicates necessary information effectively to the user, regardless of ambient conditions or the user's sensory abilities.
5. Tolerance for Error: The design minimises hazards and the adverse consequences of accidental or unintended actions.
6. Use of Design Patterns: To make the code easier to understand, and easier to extend, use pre-existing patterns.
7. Consider the User: Make the user the centre of the whole process. Understand the range of users of the system.

This project seeks to develop a parser that will scan Python code to determine how closely the code adheres to the above principles. NOTE: this is not about assessing the universal design of the user interface produced by the code; it is about looking at the code itself and seeing how universally designed the code is.

Psychogeography is defined as “the study of the precise laws and specific effects of the geographical environment, consciously organized or not, on the emotions and behavior of individuals.” One typical approach is to draw a large circle at random on a map and travel along that circle, commenting on what you see, what you hear and how it feels (e.g. look at the metal drains, are there dates on them? look at the shapes of the buildings, and how the telephone wires and electricity wires snake around them, etc.). This project seeks to develop a random (circular) route generator, and to create a means by which the experiences can be recorded, photographed, etc. and automatically formed into a webpage.
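
A minimal sketch of the route-generation step is shown below in Python; it uses a crude metres-per-degree approximation that ignores projection distortion, and all parameter values are illustrative:

import math
import random

def circular_route(lat, lon, radius_m=1000, n_points=36):
    """Return (lat, lon) points along a random circle passing through the start point."""
    metres_per_deg = 111_320.0
    # Place a random centre at distance radius_m, so the start lies on the circle.
    bearing = random.uniform(0, 2 * math.pi)
    c_lat = lat + (radius_m / metres_per_deg) * math.cos(bearing)
    c_lon = lon + (radius_m / (metres_per_deg * math.cos(math.radians(lat)))) * math.sin(bearing)
    points = []
    for k in range(n_points + 1):
        a = 2 * math.pi * k / n_points
        p_lat = c_lat + (radius_m / metres_per_deg) * math.cos(a)
        p_lon = c_lon + (radius_m / (metres_per_deg * math.cos(math.radians(lat)))) * math.sin(a)
        points.append((p_lat, p_lon))
    return points

route = circular_route(53.3498, -6.2603)   # example: a walk starting in Dublin city centre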

The Shakespeare Apocrypha is the name given to a group of plays (e.g. Sir Thomas More, Cardenio, and The Birth of Merlin) that have sometimes been attributed to William Shakespeare, but whose attribution is questionable for various reasons. Using stylistic, statistics-based metrics, e.g. Zipf analysis, sentence length, sentence structure, words used, tense, infrequent n-gram occurrences, active vs. passive voice, etc., and the development of other suitable metrics as part of the project, similarities will be measured between the canonical plays and the apocryphal ones.
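
As a small illustration of such metrics, the Python sketch below computes a few simple stylometric features for a single text; attribution work would compare feature vectors of this kind between canonical and apocryphal plays:

import re
from collections import Counter

def style_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    counts = Counter(words)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(counts) / max(len(words), 1),   # vocabulary richness
        "hapax_ratio": sum(1 for w, c in counts.items() if c == 1) / max(len(words), 1),
    }

print(style_features("To be, or not to be, that is the question."))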

This project is to investigate and develop a series of Learning Objects (units of educational content delivered via the internet), using either the IMS Content Packaging or the SCORM (Sharable Content Object Reference Model) standard. As well as having the standard learning object parameters, the learning objects for this project will be aware of how learning style can affect the means of presentation.

One of the themes of George Orwell's 1984 was that the government was simplifying the English language (both vocabulary and grammar) to remove any words or possible constructs which describe the ideas of freedom and rebellion. This new language is called Newspeak and is described as being “the only language in the world whose vocabulary gets smaller every year”. In an appendix to the novel Orwell included an essay about it, in which the basic principles of the language are explained. The objective of this project is to develop a text filter that will take in normal text and convert it into Newspeak. An initial system will simply change the words in the text to their equivalent in Newspeak, e.g. “bad”, “poor”, “lame” all become “ungood”; “child”, “children”, “boy”, “girl” become “young citizens”; “quite”, “rather”, “kind of”, “kinda” become “plus”. From there the next stage is to investigate the more fundamental translation process, whereby the grammar and structure of the text are changed to the style outlined by Orwell.
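
A minimal sketch of the first-stage, word-level filter described above is given below in Python; the substitution table is a tiny illustrative subset, not a full lexicon, and multi-word phrases such as “kind of” would need extra handling:

import re

NEWSPEAK = {
    "bad": "ungood", "poor": "ungood", "lame": "ungood",
    "child": "young citizen", "children": "young citizens",
    "boy": "young citizen", "girl": "young citizen",
    "quite": "plus", "rather": "plus", "kinda": "plus",
}

def to_newspeak(text):
    def swap(match):
        word = match.group(0)
        repl = NEWSPEAK.get(word.lower(), word)
        # Preserve an initial capital letter when substituting.
        return repl.capitalize() if word[0].isupper() else repl
    return re.sub(r"[A-Za-z]+", swap, text)

print(to_newspeak("The poor child was quite bad."))
# -> "The ungood young citizen was plus ungood."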

This project involves a state-of-the-art analysis and report on current knowledge graph implementations and frameworks that could be considered as a preprocessing step to benefit data preparation for machine learning tasks. The project would include the implementation of a use case and validation with an open source dataset. The goal is to show that usage of a knowledge graph improves accuracy for predictions and provides better benchmarks compared to the baseline.

The idea is to use a knowledge base and model the characteristics of fake news in order to detect them in free text. The approach includes building a knowledge base with one of the existing implementations (e.g. Virtuoso, AllegroGraph, etc.) or building a simpler model from scratch in Prolog. The knowledge base can be used to validate statements and detect inconsistencies which can be classified as fake news. A dataset will be used to validate the implementation.

How can we combine Neural Networks with logic rule-based systems built on Description Logic, and what benefits could be gained by extending NNs with DL? This could be a research report on current attempts to define rules in NNs in order to reduce complexity and improve predictions, or a proof-of-concept implementation showing how to use logic rule restrictions or a semantic rule-based language such as SHACL or ShEx in simple NNs.

Context uncertainty in distributed self-adaptive systems requires multiple decentralised adaptation agents that can adapt to changes in distributed systems. Learning in a decentralised multi-agent environment is fundamentally more challenging than employing a single agent. Decentralised multi-agent reinforcement learning (DMARL) faces serious problems such as non-stationary states, high dimensionality of the observation space, multi-agent credit assignment, robustness, and scalability. This project investigates the possibility of employing DMARL in a self-adaptive microservices architecture. This project possibly requires a good understanding of the Ray framework, a scalable distributed reinforcement learning framework (https://ray.readthedocs.io/en/latest/rllib.html), or Intel Coach (https://github.com/NervanaSystems/coach).

Because of the uniqueness of streaming data found in distributed systems, the design of a self-healing microservices architecture should meet the following requirements:

(1) The system should be able to operate over real-time data (no look-ahead). No data engineering is possible, as the data is collected in real time.

(2) The algorithm must continuously monitor and learn about the behaviour of the microservices cluster.

(3) The algorithm must be implemented with an automatic unsupervised learning technique, so it can continuously learn new behaviour and anomalies in real-time.

(4) The algorithm must be able to adapt to changes in the operating environment and provide an adaptation strategy that can be orchestrated over the cluster nodes.

(5) The algorithm should be able to detect anomalies as early as possible, before the anomalous behaviour interrupts the functionality of the running services in the cluster.

(6) The proposed model should minimise the false positive (false alarm) rate and the false negative rate. If the system identifies normal behaviour as an attack, this attempt should be classified as a false positive (false alarm).

(7) The proposed model should offer a high detection rate, better accuracy and a lower false alarm rate.

(8) The proposed model should offer a consistent adaptation strategy, preserve the cluster state, and provide the architecture with a rollback (auto-recovery) strategy in case the adaptation action fails.

(9) One important aspect of a self-healing microservices architecture is the ability to (i) continuously monitor the operational environment, (ii) detect and observe anomalous behaviour, and (iii) provide a reasonable policy for self-scaling, self-healing, and self-tuning the computational resources to adapt to sudden changes in its operational environment dynamically at run-time.

Spiking Neural Networks (SNNs) are a rapidly emerging research area in data analytics. SNNs are inspired by the brain's process of sequential memory. SNNs might be able to handle complex temporal or spatial data in dynamic environments at low power and with high effectiveness and noise tolerance. The success of deep learning comes with the cost of using brute-force algorithms and power-hungry GPUs, in addition to the issues of slow model training and the limitation of each model to a specific domain of MDP environments. SNNs could benefit from the advances made in evolutionary and cognitive neuroscience to be employed in the domain of IoT and multi-sensor networks. This project aims to investigate the possibility of implementing SNNs in a simulated IoT platform such as CupCarbon (http://www.cupcarbon.com).

Activity-awareness in mobile computing has inspired novel approaches for creating smart personalised services/functionalities in mobile devices. The ultimate goal of such approaches is to enrich the user's experience and enhance software functionality. One of the major challenges in integrating mobile operating systems with activity-aware computing is the difficulty of capturing and analysing user-generated content from personal handsets/mobile devices without compromising users' privacy and the security of the collected sensitive data. Although conventional solutions exist for collecting and extracting textual content generated by users in mobile computing applications, these solutions are largely unsatisfying when it comes to the personal integrity of the user. All previously known conventional solutions comprise collecting the user's generated content from various applications, such as an email client and/or Short Message Service (SMS). Unfortunately, all of these applications are introduced to the user only after exposing and sharing his/her personal data with web services located outside the mobile device, e.g. in the cloud. In addition, the collected information is stored outside the user's personal mobile device on some remote server.

These serious drawbacks make many users reluctant to use the described conventional solutions. However, there is still a demand for personalised, proactive service functionalities. Activity-aware computing enables mobile software to respond proactively and effectively to user needs based on the contextual information found in the environment where it operates. The ultimate goal of activity-aware computing is to automatically extend the application behaviour/structure based on the activity being performed by the user or software components. In this project, we investigate a model that collects user-generated content from the mobile OS messaging loop and feeds the collected context information into an experience matrix based on a sparse distributed model. The model offers the device a runtime representation of the current context, which can be used to predict the user's activity.

Co-design has its roots in the Participatory Design techniques developed in Scandinavia in the 1970s. Co-design reflects a fundamental change in the traditional designer-client relationship. A key tenet of co-design is that users, as ‘experts’ of their own experience, become central to the design process. Co-design is a multifaceted process with multiple stakeholders. Co-design requires support for training participants in co-design methods. It requires support for project management across the project lifecycle. It requires support to manage the deliverables of these projects, for example code sharing. This project addresses the development of a co-design toolkit to provide these supports across the processes of co-design. What are the useful components of this toolkit? Can their effectiveness in assisting the development process be measured? Do they help meet the learning outcomes of team projects on undergraduate Computer Science courses?

Teaching Agile software development faces many challenges, especially in the development of appropriate exercises and activities around different aspects of the methodology. For example, what is the best way to introduce the different roles involved, such as Scrum Master, Scrum Coach and Product Developer? How can user feedback be embedded in the process? This project looks at the design of a suite of supporting exercises, built around the iterations of a specific app development, to realise the learning outcomes of a course on Agile development.

The EU Web Accessibility Directive of 2016 requires public bodies of member states to ensure their websites and apps are accessible to persons with disabilities. All websites created after that date have to be accessible by 23 September 2019, existing websites have to comply by 23 September 2020, and all mobile applications have to be accessible by 23 June 2021. The Web Content Accessibility Guidelines of the W3C have also recently been upgraded to version 2.1, with new checkpoints related to cognitive challenges, amongst others. This project looks at developing effective auditing strategies to ensure compliance with these guidelines, using both automatic tools and manual processes.

The issue of applications based on Big Data excluding those on the edges of this data has become a rising topic. For example, CV screening programs that use Machine Learning can discriminate against those with disabilities or on grounds of gender, because the data used does not sufficiently represent those populations. August bodies such as the World Economic Forum have highlighted this. IBM's Fairness 360 toolkit is an example of an initiative which looks at this problem. This project examines ways in which populations are excluded, for example through outlier removal in ML pre-processing. It looks to develop metrics for inclusion, similar to those which have been established for data fairness, and to develop ways to calculate these measures. It looks to develop algorithms for greater inclusion in these applications.

Is Augmented Reality a viable technology for building usable tourist information systems? Can it be used effectively to combine geolocation and multi-modal content presentation to enhance the tourist experience?

Project ideas by lecturers 

Proposal INFO and Submission

Submit as a pdf file linked in this form

Proposal submission requirement (admission)

Students can submit their proposal and, if accepted, proceed to the dissertation once they have successfully completed all core modules* (that is, 8 modules). They can take the dissertation module alongside their option modules. While taking two option modules at the same time as the dissertation is not recommended, students are permitted to do so.

Note that: if the dissertation proposal does not reach a viable standard and gain approval, the student can exit with a Postgraduate Diploma in Computing or can resubmit a new dissertation proposal within two semesters of completing all the modules.

(*): Students who started on the TU256 programme and are following the 1-year path to complete the MSc might be allowed to proceed to the dissertation with one core module pending (this cannot be Research Design, though). Please contact the coordinator before submitting your proposal to confirm approval to submit.

Proposal Submission procedure

The proposal needs to be submitted as a pdf file linked in this form. The filename of the pdf MUST be: StudentNumber_name_surname_MSc_Proposal.pdf. 

Note: proposals that do not respect the template and the max number of words, for each section, are not taken into consideration for evaluation. All the required information is compulsory.

Proposal Submission Outcome

What is NOT a proposal

If your project fits into one of the following scenarios, your dissertation proposal will NOT be approved

Proposal Template

Latex template

You can use the same latex template employed in the Research design and proposal writing module. Download it here.

Detailed formatting

Front-page

The proposal

Proposal Submission Checklist

This checklist should be used by students who are submitting their dissertation proposals to the Dissertation Coordinator. It will assist students in completing the research proposal.

First Meeting with Your supervisor

The following checklist is to assist the MSc dissertation student and their supervisor in preparing for their first meeting.

Student

Supervisor

Topics to Discuss & Agree at First Meeting

Responsibilities

Note that it is your responsibility to submit a coherent manuscript. Although your supervisor might be happy with your work, it is the final manuscript that will be examined by other lecturers, so you alone are responsible for the final mark. Try to maximise the individual marks as outlined by the second-order criteria of the marking scheme.

Marking Scheme

First order criterion

Distinction 80+%

The practical work is of such a high professional standard that it could be distributed without significant extra effort.

The research makes a significant contribution to the chosen field and is worthy of publication.

Merit (first): 70-79%

Extremely strong internal consistency making the project a convincing whole which addresses the original research question. Evidence of originality. Impressive use of information gathered to support argument. Critical awareness of strengths and limitations

Good (2.1): 60-69%

Evidence of internal consistency which relates to original question. Very good use of information gathered to support argument. Awareness of strengths and limitations

Fair (2.2): 50-59%

Evidence of internal consistency which relates to original question but with some weaknesses in the integration of different sections. Use of information gathered but with some weaknesses in the integration of evidence. Some awareness of strengths and weaknesses

Pass (third): 40-49%

Limited evidence of internal consistency which relates to the original question, with significant weaknesses in the integration of different sections. Limited use of information gathered to sustain the argument, with significant weaknesses in the integration of evidence. Limited discussion of strengths and weaknesses.

Fail: 0-39%

Lack of internal consistency. Very limited use of information gathered to sustain the argument, with serious weaknesses in the integration of evidence. No awareness of the limitations of the dissertation.

Second order criterion

Composition, organisation and expression; Use of language; Referencing (10%)

Introduction and rationale; Formulation of research question/problem; Focus (10%)

Literature review; Range of reading; Relation to research question; Independent research (20%)

Design of solution; Critical awareness, analysis, use and evaluation of relevant theory; Rationale for research solution/approach; Information gathering and analysis; Awareness of strengths and limitations. (15%)

Analysis & evaluation of deliverables; Awareness of strengths and limitation of findings (20%)

Conclusion; Contribution and Impact; Future work and recommendations (10%)

Complexity, Originality, significance, applicability, dissemination (5%)

Verbal presentation and defense (10%)

Dissertation structure, templates, and checklists

The completed dissertation will be a substantial piece of written work in the region of 60-70 pages (approx. 20,000 words) of core chapters, with a maximum page limit of 120 pages overall. It is important to note that the length of your dissertation will depend on the topic and material that you are including; remember that quality is far more important than quantity.

Templates (word and latex) 

Structure


Initial structure

Chapter 1 – Introduction

This should be a short account of why you undertook the investigation, what the general state of knowledge was at the time you started, and why you asked the questions that your research/observations were expected to answer. It should state your research question and briefly introduce the research undertaken. A brief reader's guide to the dissertation should be included.

Hints

Checklist

Chapter 2 – Literature review and related work

It is essential that this should be a critical review in which the various papers are compared and in which you express your own opinion of the conclusions that may be drawn, doing your best to reconcile discrepant results in favour of one or other set. Provide a summary at the end of the sections or of the whole review. Remember that the content of this chapter must be relevant to the actual research carried out; it is not a “brain dump” of everything you have read. You must demonstrate analysis and synthesis of the literature. In some cases it may be necessary to divide the state of the art into two separate chapters: one covering the application domain and the other the technologies, or one describing the background/context of your research and one on the state of the art for your specific issue.

Hints

Checklist

Chapter 3 – Design and methodology

The general structure of the study should be described clearly. The comparisons that are going to be made, the controls, technical details, etc. should be included if appropriate. This chapter might report on the design of a software-based solution or of an experiment using existing datasets. It also includes the methodology/ies adopted for designing the solution and for evaluating it (e.g. errors, performance measures, accuracy, ROC curves, t-tests, correlations, etc.). This chapter should include your ethical considerations. You should try to demonstrate as much as possible your commitment to conducting research in an ethical and responsible manner. If no critical issues are identified, this should also be highlighted in this section. Please refer to the Ethical Considerations section for more information.

Hints

Checklist

Chapter 4 – Results, evaluation and discussion

Depending on the nature of the project, this chapter will describe the actual work carried out, e.g. any experiment undertaken or system implementation, following the theoretical description of the design in Chapter 3. This is the most important section of your dissertation. It should also focus on the presentation and discussion of the findings in the light of what is already supposed to be known, drawing on the literature review conducted in Chapter 2. It should show how the findings confirm or refute the research hypothesis and how they differ from previous work in the literature. Do not use this section for another review of the literature.

Hints

Checklist

Chapter 5 – Conclusion

This should be a short account of the results of your work, emphasising mainly what is new. There should be a close correlation between this chapter and chapter 1, in which you described the problem you were addressing. It is advisable to deal with the limitations of your research at this stage and to suggest here what further work might be done. This is the appropriate place to do a self-assessment of your research.

Hints

Checklist

References (using the APA6 referencing style)

References should be cited consistently in the text. The references in the reference list at the back of the dissertation should be listed in alphabetical order. They should also be complete, so that a reader wanting to locate a particular reference has all the information necessary to do so (including page numbers, volume and issue).

Appendixes

These should contain supplementary material that is not necessary for the reader to follow the argument. For example, the text of a questionnaire, detailed UML diagrams, or a complete Software Requirements Specification should be placed in an appendix. It is not considered necessary to include code, but you may do so by including a link within your dissertation PDF.


Reminders

Formatting style (compulsory)


Preparing the dissertation

After you have completed the first draft

Ethical considerations

If you plan to collect data as part of your MSc dissertation research, you need to contact the coordinator early. There are time constraints in getting ethical approval, which may not be feasible within your dissertation timeline. However, if you think data collection is an important part of your work, please contact the coordinator as soon as possible to evaluate the feasibility of your proposal.

If you are planning to use existing datasets you will likely not need to apply for ethical approval, but you should apply the same critical evaluation to other people’s data collection and management processes. For example: was the data ethically sourced? Are the records anonymised or pseudonymised? Was informed consent obtained? Such questions should be addressed in the design chapter of your dissertation, in a similar way to how you identify possible limitations of your project.

Dissertation for evaluation and submission

To submit your dissertation for evaluation, email a PDF of the text together with the plagiarism report file to the Dissertation Coordinator (lucas.rizzo@tudublin.ie).


Submitting your electronic copy of the dissertation (PDF)


Plagiarism report

You will need to submit a plagiarism report with your dissertation. In order to create a plagiarism report you need to enrol in the following Brightspace module: 

Module Name: Research Proj and Dissertation SPEC9999: 2023-24
Module Code: SPEC999924055TU060-2324

Under Assessment - Assignment you will find a submission box entitled “MSc Dissertation submit for plagiarism report”. You should upload your final PDF here. You can print the plagiarism score as a PDF and send it to the coordinator with your final dissertation.

Presentation of dissertation

About the presentation

Presentation Structure

Publishing the final dissertation

After the presentation, the final submission is a PDF emailed to the Dissertation Coordinator (lucas.rizzo@tudublin.ie).

Publishing your final dissertation

If your dissertation is at a first or upper second class honours standard (i.e. a mark of 60 or above), it will be published by the MSc coordinator and made available in TU Dublin’s library repository. If you do not want your dissertation to be publicly available, please inform the Dissertation Coordinator.

Checklist

The following checklist is to assist the MSc dissertation student in completing the final version of the dissertation. You will be contacted by your supervisor after the presentation about what changes, if any, are needed to your dissertation. You will only have a short period of time to make these changes.

The spine of the dissertation should contain

The front cover of your dissertation should have

Note: hard-bound copies of your dissertation are no longer stored by TU Dublin, so you are only required to provide a hard-bound copy of your final dissertation to your supervisor if they request one.

Submitting Your Final Dissertation (PDF)


Copyright, deferrals, failure and fees

Copyright

In accordance with the TU Dublin IP Policy, TU Dublin recognises that students who create IP own the products of their intellectual efforts. TU Dublin will not use any of the content in your dissertation.

Deferral

Applications for deferral of the Dissertation module can be made by following the deferral procedures available here.

Requests for deferrals should be accompanied by supporting documentation, e.g. medical certificates or other appropriate documentation which give appropriate reasons for the deferral request. Except in exceptional circumstances, all students who defer the Dissertation are expected to take a new Dissertation topic and submit a new Dissertation proposal.

Part-time students who defer their dissertation within 4 weeks of the start date of the semester in which they are completing the deferral can apply to have their fee carried over to the following semester. Except in exceptional circumstances, part-time students who defer outside of this period will be expected to pay to take the Dissertation module again.

In place of a deferral, students with certified medical circumstances can apply for an extension of the dissertation deadline for the length of time of the certified illness. Students can apply for an extension through their supervisor and by informing the coordinator.

The time frame to complete the MSc in Computing part-time is 6 years. A student should submit a proposal for the dissertation module within one year of completing the other modules. If a student does not do this, they are expected to exit with the Postgraduate Diploma (PgDip).

Failure

If you fail the dissertation you need to make a formal written application, to be sent to the coordinator, specifying that you wish to repeat the dissertation module.

Additionally, if you are approved to repeat, you will need to pay the appropriate fees (see this page). You will also be expected to submit a new proposal on a new topic and, if approved, start a new project. A new supervisor will be assigned to you.

Repeats of dissertations are handled according to the TU Dublin regulations as outlined in the General Assessment Regulations.

Fees

Once you have received a formal email indicating the approval of your dissertation proposal, and a supervisor has been assigned, you will be registered for the Dissertation module.

If you are a full time student the fees for the dissertation module are covered in the fee you paid for the MSc programme.

If you are a part-time student, you will be registered for the Dissertation module and will need to pay the related fees for that module.

For further information about fees, please refer to the TU Dublin Fees and Grants information.