MSc Dissertation
In order to qualify for the award of an MSc, a student is required to propose, design and undertake a detailed study of a topic relevant to Computer Science, in their area of study (ASD, DA), and to complete a dissertation. The work must be of a standard appropriate for a taught Master's degree and should include a degree of originality. The sections below outline the practical steps and requirements for undertaking the dissertation, from planning your proposal to submitting your final dissertation.
The dissertation coordinator is Dr Lucas Rizzo.
Material should be submitted to lucas{dot}rizzo[at]tudublin(dot)ie
Key Information on the MSc Dissertation process
Aim of dissertation
The aim of the dissertation is to enable students to apply the skills and knowledge gained during previous stages of their study to the analysis of a specific problem or issue relevant to their programme of study (ASD or DA), via a substantial piece of independent work carried out over a 3 month (one semester) period. Further, the aim is for students to demonstrate proficiency in the design of a research project, application of appropriate research methods, collection and analysis of data, and presentation of results.
Dissertation manuscript
The completed dissertation will be a substantial piece of written work in the region of 60–70 pages (approx. 20,000 words) of core chapters (Introduction chapter to Conclusion chapter), with a maximum limit of 120 pages overall. It is important to note that the length of your dissertation will depend on the topic and material you include; remember that quality is far more important than quantity.
Examination and marks
Your dissertation will be examined by a panel of lecturers, chaired by the dissertation coordinator. The marking scheme that will be used can be found below. If you pass the dissertation and correctly submit the final version, you will be awarded a Master's degree at the next available graduation ceremony. If you fail, you will be awarded a postgraduate diploma at the next available graduation ceremony.
Deadlines
Note: this table remains consistent across all years, with modifications only to ensure that days do not align with weekends, bank holidays, and similar events.
(*) These dates apply to part-time students only
(**)This dissertation option is only available to full time students who have applied and received approval. Requests for approval are made to the Dissertation Coordinator in writing and need to be supported by appropriate reasons. Progression to project is also subject to availability of a supervisor who is willing to supervise over the Summer.
(***) The exact list and order of presentations will be scheduled roughly 10 days before the scheduled days, shortly after the submission of the last batch of theses. Presentations take place during the day, not in the evenings, as they are official TU Dublin examinations. Planned dates are usually available 6 months in advance to allow students to plan accordingly. These dates cannot be changed, for a number of reasons including room availability, the supervisor's, second reader's and coordinator's duties, and constraints dictated by the TU Dublin exam office.
Research topic ideas
The following are suggested research topics provided by lecturers in the school. You are welcome to create your own project idea or to use these suggestions as inspiration to define a new topic for your dissertation. If you are interested in any of these topics, you should contact the lecturer who wrote the project idea to find out more or to discuss how you might approach the topic in your proposal. Some topics are high level, and multiple students could take different approaches to them; others are specific, and you will need to check whether another student is already working on the research problem.
Please note: You cannot choose your supervisor. Supervisors will be allocated centrally following the submission and evaluation of dissertation proposals. If you have worked with a particular lecturer on a project proposal, you can indicate that you would like to work with them as part of your submission; however, there is no guarantee that they will be assigned as your supervisor, as this may not be possible.
Individual projects
COP28 Open Data Warehouse to provide datasets to monitor compliance with climate goals from COP28. (Dr. John Gilligan)
This project will look at the development of an open data resource to support ML-based compliance auditing of countries such as Ireland against the COP26 climate goals. Through literature surveys and engagement with stakeholders, the project will investigate what is required for this task in terms of both data resources and data analytics. From this it will identify data sources, both local and international, to support this work. Appropriate pre-processing and analytic techniques from data science will be trialled through case studies. The project will establish the requirements for hosting this data according to open data standards, including how best to manage the data integrity of stakeholders, and will propose a legal framework for the sharing of data. It will look at the technical details of storing and sharing data through warehousing, API construction and secure access.
Improving Interpretation of Child Elbow X-rays for clinicians across the Globe (Dr. John Gilligan)
In some high-income countries, and many low- to middle-income countries, the ability of health care staff to read elbow X-rays of children is poor. There is a standardised process for reading an elbow X-ray, which is related to the child's age and to very specific lines to observe (e.g. the anterior humeral line or radiocapitellar line). The method of producing an X-ray in low- and middle-income countries may be electronic or acetate (figure 1), while in high-income countries it is electronic (figure 2). The goal is to evaluate the feasibility of a smartphone application that assists the clinician with the interpretation of an image in real time. Two options are proposed: 1. Taking a photograph of the image and uploading it to the application, which assists in the provision of the lines and areas of interpretation. 2. Holding the smartphone (while in an application) to an image and allowing the application to provide the lines and areas of interpretation, assisting the clinician in reaching the correct diagnosis.
GeoAI for Land Use Classification (Dr. Bianca Schoen Phelan)
This project aims to develop a deep learning model for land use classification using satellite imagery and GIS data. The first step is to identify suitable data sources, for example Sentinel-2 imagery, and to find corresponding GIS information, including land use labels. The methodology involves training a convolutional neural network (CNN) to extract spatial features from the imagery and integrating GIS data as input features. The model should be capable of classifying land use categories such as urban, agricultural and forested areas. This project has practical applications in urban planning, environmental monitoring, and natural resource management.
Traffic Flow Prediction with Spatial Deep Learning (Dr. Bianca Schoen Phelan)
This project focuses on predicting traffic flow and congestion patterns in urban areas using deep learning and GIS. Traffic data, such as that provided by TII, could be used to facilitate this project. The methodology involves building a recurrent neural network (RNN) model that integrates spatial information from GIS data, such as road networks and traffic signal locations. The model will predict traffic congestion, helping commuters with route planning and city authorities with traffic management strategies.
Satellite Image Super-Resolution for Environmental Monitoring (Dr. Bianca Schoen Phelan)
This project addresses the enhancement of satellite imagery resolution for improved environmental monitoring. We will leverage a dataset of multi-resolution satellite images along with GIS-based terrain and land cover information. The methodology involves training a deep learning model, such as a Generative Adversarial Network (GAN), to upscale lower-resolution satellite images while preserving critical details. The high-resolution images will enhance the accuracy of applications like land cover analysis, disaster response, and ecological studies.
Geospatial Image Captioning with Deep Learning (Dr. Bianca Schoen Phelan)
In this project, we explore the development of a deep learning model for generating natural language descriptions for geospatial images. The required dataset includes geospatial images with associated location and environmental data. The methodology involves using a combination of Convolutional Neural Networks (CNNs) to analyze images and Recurrent Neural Networks (RNNs) for language generation. By integrating GIS data, the model can provide contextually relevant and detailed image captions. This technology benefits navigation systems, accessibility for visually impaired individuals, and location-based content generation.
Real-time Forest Fire Detection using Remote Sensing and Deep Learning (Dr. Bianca Schoen Phelan)
This project focuses on the early detection of forest fires using remote sensing and deep learning. The required dataset includes satellite imagery and historical wildfire data. The methodology involves training a deep learning model, such as a CNN or other suitable architectures, to detect smoke plumes and other fire-related patterns in satellite images. The system could also provide real-time alerts and wildfire locations, enabling rapid response by firefighting teams. This project contributes to improved forest fire management and environmental conservation efforts.
Reinforcement Learning to reduce energy consumption (Dr Luis Miralles)
Machine Learning (ML) has become very popular in recent decades because it allows models to be created from previous information with little human intervention. ML has been widely used for classification and for predicting values. Nevertheless, this project focuses on a branch of ML called Reinforcement Learning (RL). In RL, an agent moves from one state to another, receiving a positive or a negative reward. The agent has to learn the best actions for each state by maximising the total reward from the initial to the final state; this is called the optimal policy. RL has made breakthrough achievements in the last few years, such as beating professional players at the game of Go, which remained one of the biggest challenges in Artificial Intelligence. The project aims to find the best possible combination of actions for reducing the energy bill of an industrial company. To this end, we will simulate the energy consumption of a company and then implement an RL algorithm, such as Deep Q-Learning, to find the best actions at each point to reduce energy consumption as much as possible.
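The agent/state/reward loop described above can be sketched with tabular Q-learning (a simplification of the Deep Q-Learning variant the project would use). The tariff hours, actions and costs below are invented for illustration only.

```python
import random

# Tabular Q-learning sketch: an agent learns when to run a machine so that
# energy is consumed during cheap-tariff hours.
# States: hour of day (0-23). Actions: 0 = wait, 1 = run the machine.
# Rewards are negative costs: running is cheap at night; waiting delays
# production and so also carries a cost.

CHEAP_HOURS = set(range(0, 7))          # assumed night tariff
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def reward(hour, action):
    if action == 0:
        return -2.0                      # backlog cost of waiting
    return -1.0 if hour in CHEAP_HOURS else -5.0

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = {(h, a): 0.0 for h in range(24) for a in (0, 1)}
    for _ in range(episodes):
        hour = rng.randrange(24)
        for _ in range(24):              # one simulated day per episode
            if rng.random() < EPSILON:   # epsilon-greedy exploration
                action = rng.choice((0, 1))
            else:
                action = max((0, 1), key=lambda a: q[(hour, a)])
            r = reward(hour, action)
            nxt = (hour + 1) % 24        # time advances regardless of action
            best_next = max(q[(nxt, 0)], q[(nxt, 1)])
            q[(hour, action)] += ALPHA * (r + GAMMA * best_next - q[(hour, action)])
            hour = nxt
    return q

q = train()
# Greedy policy derived from the learned Q-values.
policy = {h: max((0, 1), key=lambda a: q[(h, a)]) for h in range(24)}
```

With these invented costs the learned policy runs the machine during cheap hours and waits otherwise; in the real project the table would be replaced by a neural network over a much richer simulated state.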
Reinforcement Learning to optimize the Government's actions to mitigate the COVID pandemic (Dr Luis Miralles)
Whenever a country is hit by a pandemic, governments must take the best actions to safeguard the public health of the country while also reducing the negative impact of the pandemic on the economy. The government can define different phases, ranging from more to less restrictive, but it is crucial to develop tools that help it plan these phases given its priorities. In this work, we develop a model based on Reinforcement Learning (RL) to help governments combat viral pandemics such as COVID-19. To this end, we will implement an SEIR model to represent the spread of the virus through the population. We will compare our RL-based approach with other optimization approaches such as Genetic Algorithms or Ant Colony Optimization. We expect to provide new insights on this topic to help countries manage pandemics more effectively.
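The SEIR model mentioned above can be sketched as a simple discrete-time simulation. The parameter values below (transmission rate beta, incubation rate sigma, recovery rate gamma) are illustrative assumptions; a restriction phase chosen by an RL agent would act by scaling beta at each step.

```python
# Minimal discrete-time SEIR simulation (pure-Python sketch, assumed
# parameters). Compartments: Susceptible, Exposed, Infectious, Recovered.

def simulate_seir(days=160, n=1_000_000, beta=0.5, sigma=1/5, gamma=1/7,
                  initial_infected=10):
    s, e, i, r = n - initial_infected, 0.0, float(initial_infected), 0.0
    history = []
    for _ in range(days):
        new_exposed = beta * s * i / n     # S -> E (new infections)
        new_infectious = sigma * e         # E -> I (end of incubation)
        new_recovered = gamma * i          # I -> R (recovery)
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((s, e, i, r))
    return history

hist = simulate_seir()
# Day with the largest number of simultaneously infectious people.
peak_day = max(range(len(hist)), key=lambda d: hist[d][2])
```

An RL agent would observe the (S, E, I, R) state, pick a restriction phase that multiplies beta, and receive a reward trading off infections against economic cost.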
A human-in-the-loop solution for the construction of explainable AI systems (Dr. Lucas Rizzo)
The recent increased use of "black-box" systems in Artificial Intelligence has led to an issue of trustworthiness and understandability among the end-users operating such systems. The inability to explain the inferences of these systems severely affects their applicability. Theoretically, approaches exist that enable interpretability and understandability of "black-box" systems by humans. However, these are difficult to implement in practice due to the large amount of deductive, declarative knowledge required from experts. This project aims to tackle this problem by taking advantage of the precision and accuracy that can be achieved by machine learning in modelling tasks, and of the explanatory capacity of defeasible argumentation, a novel emerging paradigm in AI. This paradigm is a technique for modelling reasoning under uncertainty using the notions of arguments and contradictions among them, routinely employed in human reasoning activities. The proposal is to automatically extract arguments, essentially propositional rules, from "black-box" models learnt with machine learning. In turn, potential contradictions (attacks) among the extracted arguments are to be identified heuristically. This network of arguments and attacks can be validated by one or more end-users, maintaining the interpretability and understandability of the proposed solution. At the same time, quantitative inferences can be produced by this same network of arguments with transparent algorithms from the literature of defeasible argumentation. It is expected that these inferences will present similar predictive power to the exploited machine learning models.
Visualising latent space in autoencoders for Gravitational Waves detection (Dr. Luca Longo)
Gravitational waves are disturbances in the curvature of spacetime, generated by accelerated masses, that propagate as waves outward from their source at the speed of light. Detecting these types of "rare" events is of extreme importance. This project will focus on the construction of a Deep Learning autoencoder for the automatic identification of gravitational waves. In detail, it will focus on the construction of a visualisation layer of the latent space (learned by the autoencoder) that will help humans understand the salient hidden patterns inherent in gravitational waves.
Explainable Artificial Intelligence: Wrapping machine learning models with argumentative capabilities (Dr. Luca Longo)
Argumentation has advanced as a solid theoretical research discipline for inference under uncertainty within Artificial Intelligence. Scholars have predominantly focused on the construction of argument-based models for demonstrating non-monotonic reasoning adopting the notions of arguments and conflicts. However, they have marginally attempted to examine the degree of explainability that this approach can offer to explain inferences to humans in real-world applications.
This proposal concerns the application of argumentation to wrap machine learning models with argumentative capabilities.
Machine learning for the creation of Computational models of trust (Dr. Luca Longo)
Computational Trust applies the human notion of trust to the digital world, which is often seen as malicious rather than cooperative.
Trust factors could be promising for assessing the trustworthiness of virtual identities interacting in an open environment. This proposal concerns the application of computational trust to online communities such as Wikipedia and Stack Overflow, where content is created collaboratively by humans. In detail, the goal is to create a model of trust to evaluate the trustworthiness of information in online communities. This can be done employing machine learning, either unsupervised or supervised.
Machine learning for mental workload modeling (Dr. Luca Longo)
Past research in HCI has generated a number of procedures for assessing the usability of interactive systems. These procedures tend to omit characteristics of the users, aspects of the context and peculiarities of the tasks, and building a cohesive model that incorporates these features is not obvious. A construct greatly invoked in Human Factors is human Mental Workload, whose assessment is fundamental for predicting human performance. This proposal focuses on empirical research investigating which factors mainly compose mental workload and their impact on task performance. A user study was carried out with participants executing a set of information-seeking tasks over three popular websites (a dataset is ready). The goal is to investigate whether Supervised Machine Learning techniques based upon different learning strategies can be successfully employed to build models of mental workload aimed at predicting classes of task performance, and to extract the factors that contribute most to this goal.
Deconvolutional neural networks for visualisation of representations (Dr. Luca Longo)
Deconvolutional networks are convolutional neural networks (CNN) that work in a reversed process. Deconvolutional networks strive to find lost features or signals that may previously not have been deemed important to a CNN's task. A signal may be lost due to having been convolved with other signals. The deconvolution of signals can be used in both image synthesis and analysis. This project will aim to build a deconvolutional neural network that works with EEG signals, and to visualise CNN-learnt representations by back-projecting them to the original input space, allowing a deeper interpretation.
Extracting symbolic rules from Recurrent Neural Networks (Dr. Luca Longo)
Rule extraction (RE) from recurrent neural networks (RNNs) refers to finding models of the underlying RNN, typically in the form of finite state machines, that mimic the network to a satisfactory degree while having the advantage of being more transparent. This project will focus on creating a solution to the above problem with EEG data.
Autoencoders for noise reduction in EEG signals (Dr. Luca Longo)
An autoencoder is a neural network trained with an unsupervised learning algorithm that uses backpropagation to generate an output value close to its input value. Different types of autoencoders exist: sparse, stacked, variational. Designing an architecture is itself a challenge, as it includes the design of the hidden layers, which can be convolutional, dense or recurrent, as well as the number of neurons and various hyperparameters. This project will focus on this challenge, and in particular on the development of an autoencoder for noise reduction in EEG signals. This will be compared against the well-known Principal Component Analysis (PCA) algorithm.
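As a flavour of the PCA baseline mentioned above, the sketch below denoises synthetic 2-D points by projecting them onto the leading principal component. Real EEG data would be high-dimensional, and the autoencoder would replace this linear projection with a learned non-linear one; the data here is invented for illustration.

```python
import math

# PCA denoising sketch: noisy points are projected onto the leading
# principal component, stripping off-component noise.

def leading_eigenvector(a, b, c):
    """Unit leading eigenvector of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    vx, vy = b, lam - a
    if vx == 0 and vy == 0:           # matrix already diagonal with a >= c
        vx, vy = 1.0, 0.0
    n = math.hypot(vx, vy)
    return vx / n, vy / n

def pca_denoise(points):
    mx = sum(p[0] for p in points) / len(points)
    my = sum(p[1] for p in points) / len(points)
    centred = [(x - mx, y - my) for x, y in points]
    a = sum(x * x for x, _ in centred) / len(centred)
    b = sum(x * y for x, y in centred) / len(centred)
    c = sum(y * y for _, y in centred) / len(centred)
    ex, ey = leading_eigenvector(a, b, c)
    out = []
    for x, y in centred:
        s = x * ex + y * ey           # coordinate along the component
        out.append((mx + s * ex, my + s * ey))
    return out

# Clean points on the line y = 2x, corrupted by alternating perpendicular noise.
u = (-2 / math.sqrt(5), 1 / math.sqrt(5))
clean = [(t / 10, 2 * t / 10) for t in range(-10, 11)]
noisy = [(x + 0.1 * (-1) ** k * u[0], y + 0.1 * (-1) ** k * u[1])
         for k, (x, y) in enumerate(clean)]
denoised = pca_denoise(noisy)
```

The autoencoder developed in the project would be evaluated against exactly this kind of baseline: does its reconstruction sit closer to the clean signal than the PCA projection does?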
Subjective mental workload and objective indicators of activity with jQuery (Dr. Luca Longo)
The demands of evaluating usability of interactive systems have produced, in the last decades, various assessment procedures. Often, in the context of web-design, when selecting an appropriate procedure, it is desirable to take into account the required effort and expense to collect and analyse data.
For this reason, the notion of performance is acquiring importance for enhancing web-design. However, assessing performance is not a trivial task, and many computational methods and measurement techniques have been proposed. One important construct strictly connected to performance is human Mental Workload (MWL), often referred to as Cognitive Workload. This study aims to assess subjective mental workload over web-based tasks and to investigate its correlation with objective indicators of the tangible activity of online users in the browser (mouse movement, scrolling, clicking, focus), captured using jQuery and JavaScript.
Mental workload, learning and tag clouds (Dr. Luca Longo)
Mental workload is probably the most invoked concept in Ergonomics. In a nutshell, it can be intuitively defined as the amount of mental work necessary for a person to complete a task over a given period of time. Several MWL assessment procedures have been proposed in the literature; examples include the subjective instruments NASA Task Load Index and the Workload Profile. Similarly, learning can be quantified in several ways. The aim of this project is to adopt tag cloud visualization techniques to quantify learning before and after teaching sessions and to relate it to mental workload indexes.
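The tag cloud idea can be sketched very simply: build word-frequency weights from text collected before and after a teaching session and compare the two clouds. The texts and stopword list below are invented for illustration.

```python
import re
from collections import Counter

# Tag-cloud weights are just word frequencies; comparing clouds built
# before and after a session gives one crude, quantifiable signal of learning.

STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "in"}

def cloud_weights(text):
    """Frequency of each non-stopword: the 'size' of each tag in the cloud."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

# Hypothetical student answers before and after a lesson on sorting.
before = cloud_weights("sorting is putting things in order")
after = cloud_weights("quicksort partitions the array and sorts "
                      "each partition recursively, giving an order")

# Terms appearing only after the session suggest newly acquired vocabulary.
new_terms = set(after) - set(before)
```

In the project, these per-session term weights would be related to MWL indexes collected with instruments such as the NASA Task Load Index.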
On the relation between cognitive load theory and learning (Dr. Luca Longo)
Cognitive Load Theory (CLT) has been proposed as a means of designing instructional material and delivering it in a way that maximizes learning. From this perspective, it is evident that CLT becomes extremely important in third-level education. An adequate mental workload is a condition in which there is a balance between the intrinsic difficulty of a task/topic, the way it is presented to the learners (extraneous load) and the amount of effort the learner makes to integrate the new knowledge with the old (germane load). However, the experience of cognitive workload is not the same for every individual, varying according to cognitive style, education and upbringing. The aim of this project is to use CLT jointly with computational methods for mental workload representation and assessment to quantify the mental load imposed on learners by teaching activities and instructional material. The assumption is that if learners experience an optimal cognitive load during a class, their learning is optimized. It follows that, if their cognitive load can be quantified after any teaching session, this quantity can be used as a form of empirical evidence to select the most effective teaching method and/or instructional material in a given context and for a given audience.
The project aims to investigate the relationship between the quantified cognitive load of learners and their quantified learning during teaching sessions.
On the relationship between usability and mental workload of interfaces (Dr. Luca Longo)
The demands of evaluating usability of interactive systems have produced, in the last decades, various assessment procedures. Often, in the context of web-design, when selecting an appropriate procedure, it is desirable to take into account the effort and expense required to collect and analyse data. For this reason, web-designers have tended to adopt cheap subjective usability assessment techniques for enhancing their systems. However, there is a tendency to overlook aspects of the context and characteristics of the users during the usability assessment process. For instance, assessing usability in testing environments is different from assessing it in operational environments. Similarly, a skilled person is likely to perceive usability differently than an inexperienced person. For this reason, the notion of performance is acquiring importance for enhancing web-design. However, assessing performance is not a trivial task, and many computational methods and measurement techniques have been proposed. One important construct strictly connected to performance is human Mental Workload (MWL), often referred to as Cognitive Workload. Several MWL assessment procedures have been proposed in the literature, but a measure that can be applied to web-design is lacking. Similarly, recent studies have tried to employ the concept of MWL jointly with the notion of usability. However, despite this interest, not much has been done to link these two concepts together and investigate their relationship.
The aim of this research study is to shed light on the correlation of these two concepts and to design a computational model of mental workload assessment that will be tested with user studies and empirically evaluated in the context of web-design.
Enhancing Decision Making with Argumentation Theory (Dr. Luca Longo)
Argumentation theory (AT) is an important new multi-disciplinary topic in Artificial Intelligence (AI) that incorporates elements of philosophy, psychology and sociology and studies how people reason and express their arguments. It systematically investigates how arguments can be built, sustained or discarded in a defeasible reasoning process, and the validity of the conclusions reached through the resolution of potential inconsistencies. Because of its simplicity and modularity compared to other reasoning approaches, AT has been gaining importance for enhancing decision-making. This project aims to study the impact of defeasible reasoning and formal models of argumentation theory for supporting and enhancing decision-making. Multiple fields of application will be tested against state-of-the-art approaches: decision-making in health care, multi-agent systems, trust and the Web.
Enhancing the representation of human Mental Workload with Argumentation Theory and defeasible reasoning (Dr. Luca Longo)
Argumentation theory (AT) is an important new multi-disciplinary topic in Artificial Intelligence (AI) that incorporates elements of philosophy, psychology and sociology and studies how people reason and express their arguments. It systematically investigates how arguments can be built, sustained or discarded in a defeasible reasoning process, and the validity of the conclusions reached through the resolution of potential inconsistencies. Because of its simplicity and modularity compared to other reasoning approaches, AT has been gaining importance for enhancing knowledge representation. This project aims to study the impact of defeasible reasoning and formal models of AT for enhancing the representation of the ill-defined construct of human mental workload (MWL), an important interaction design concept in human-computer interaction (HCI). The argumentation theory approach will be compared against other knowledge-representation approaches.
Computational Trust and automatic assessment of trust of online information (Dr. Luca Longo)
Scientific research in the area of computational mechanisms for trust and reputation in virtual societies is an emerging discipline within Artificial Intelligence. It is aimed at increasing the reliability, trust and performance of electronic communities and online information. Computer science has moved, in the last decades, from the paradigm of isolated machines to the paradigm of networks and distributed computing. Similarly, Artificial Intelligence is quickly shifting from the paradigm of isolated and non-situated intelligence to the paradigm of situated, collective and social intelligence. This new paradigm, together with the emergence of information society technologies, is responsible for the increasing interest in trust and reputation techniques applied to public online information, communities and social networks. This study is aimed at investigating the nature of trust, the factors that affect trust of online information, and the design of a computational model for assessing trust. This will be evaluated empirically with user studies involving several online websites and people.
Online Community dynamics and behaviour (Dr. Pierpaolo Dondio)
This research investigates the behaviour and dynamics of online communities, including models of reputation and trust. It focuses specifically on the investigation of how financial online communities react to market crashes and the predictive power of such communities. This applied research makes use of a multidisciplinary set of techniques such as data- and text-mining techniques, along with econometric approaches, network analysis, agents, user modelling and trust.
A data analysis approach on multi-language versions of Wikipedia (Dr. Pierpaolo Dondio)
Is Wikipedia (positively or negatively) biased against some cultures?
Wikipedia is present in more than 250 languages. It is reasonable to think that each Wikipedia version represents the point of view of a specific culture and country. Even when a common topic is presented across multiple versions (written in different languages), it is reasonable to think that the importance and emphasis given to the topic depend on the cultural background of the writers of the articles. This variable importance could be negligible for some neutral topics, but it could be quite significant for controversial topics or topics strongly representative of particular cultures. For instance, it is likely that an article about Oscar Wilde is better described in the English version than in the Russian one, while a major Russian writer such as Chekhov or Tolstoy could have more relevance in the Russian Wikipedia, likely with a higher importance than Shakespeare. However, Wikipedia guidelines underline how Wikipedia articles should follow a neutral point of view, where every topic is presented in a fair, balanced and objective way.
Therefore, a question arises: are the differences among versions of Wikipedia still compatible with a neutral point of view, or are they a consequence of a bias towards some topics and cultures? The project aims to collect a set of topics common to different Wikipedia versions and to quantify the bias that each version has towards its own culture or other cultures. After a group of Wikipedia versions has been selected, a selection of articles will be defined and extracted from a Wikipedia dump. The techniques used will include social network analysis and data analysis; no text analysis is required. The core idea is to design and quantify a level of importance for an article in each Wikipedia version, and then check whether the difference in this level of importance across versions is statistically significant or tolerable.
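One possible shape for that final significance check, sketched with made-up counts and using an article's share of incoming links as a hypothetical importance measure, is a two-proportion z-test:

```python
import math

# Compare the "importance" of one article in two Wikipedia versions,
# measured here as its share of sampled incoming links.

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for H0: the two underlying proportions are equal."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: inlinks to the article vs total inlinks sampled.
z = two_proportion_z(k1=450, n1=100_000,   # e.g. English version
                     k2=900, n2=100_000)   # e.g. Russian version

significant = abs(z) > 1.96  # 5% level, two-sided
```

In the project itself the importance measure would be designed as part of the work (inlinks, PageRank, article length, edit activity, and so on), and the same test logic would be applied across many article pairs with an appropriate multiple-comparison correction.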
Using StackOverflow for Teaching and Learning. (Dr. Pierpaolo Dondio)
In the field of Computing, online communities of practice such as Stack Overflow have great potential for educational purposes. Michael Staton suggested that technology firms now consider Stack Overflow to be "the new Computer Science Department where people go to learn". A recent survey conducted among Irish lecturers and students revealed that more than 75% of lecturers have already used Stack Overflow, 82% admit to having learned something from it, half of them think SO could be used in teaching Computing, 30% think Stack Overflow explains concepts better than a university textbook, and 35% of students think that SO always or often explains concepts better than their lecturers. However, both students and lecturers complained about the lack of structure and organization of the Stack Overflow material. If this material is to be used for education (and not only for quick reference), it must be better organized: good material should be filtered, it should be easier to navigate, and the learner should understand and be guided through the prerequisites of each question. Computing is a highly interconnected discipline; for instance, writing a simple PHP script requires an understanding of HTML, DB connections, SQL and bash scripting. If we are to bear the learner in mind, the most crucial thing is to provide her/him with a compass to safely navigate the learning material. Moreover, not all the content of Stack Overflow is suitable for learning: some of the Q&A are quick cut-and-paste code fixes, while other pages are complete and sound discussions and presentations of a Computer Science topic that can have great potential for educational purposes. The aim of the project is to provide tools to leverage and better exploit the learning potential of Stack Overflow. The following are a few ideas that the project might explore.
First, using text mining and data analytics techniques, the project aims to separate the material more interesting for teaching purposes from the purely technical tips-and-tricks content. Second, using text analytics and visualization techniques, the project could visualize a network of Computer Science concepts emerging from the content of Stack Overflow, and show the links and prerequisites between them. The network can be used by a learner as a compass to understand the links between concepts, to understand which concepts are the fundamental ones, and to navigate among the questions in a meaningful way.
Finally, given a topic (such as “database normalization”, “file reading in C” or “inheritance in Java”), the project could develop a tool that automatically selects a set of Q&A pages relevant to the topic and with didactic value, which can be used by a learner for practice or by a lecturer to design a tutorial/lab session. In this scenario, a question can be posed to the learner with the answer kept hidden, revealing it only when appropriate.
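As an illustration of the first idea, the sketch below scores an answer's didactic value using hand-picked text signals; the cue list, the weighting, and both sample answers are invented for illustration, and a real project would learn such a model from labelled data.

```python
import re

# Hypothetical heuristic: score a Stack Overflow answer's didactic value.
# Signals (all assumptions for illustration): explanatory connectives and a
# low code-to-prose ratio suggest a teaching-style answer, not a quick fix.
EXPLANATORY_CUES = {"because", "in general", "the reason", "note that", "conceptually"}

def didactic_score(answer: str) -> float:
    code_chars = sum(len(m) for m in re.findall(r"`[^`]*`", answer))
    prose_chars = max(len(answer) - code_chars, 1)
    cue_hits = sum(1 for cue in EXPLANATORY_CUES if cue in answer.lower())
    # The weighting is arbitrary; a trained classifier would replace it.
    return cue_hits + prose_chars / (prose_chars + code_chars)

snippet = "Just use `arr.sort()`."
essay = ("Note that sorting is O(n log n) in general because comparison "
         "sorts cannot do better; `sort()` uses Timsort.")
assert didactic_score(essay) > didactic_score(snippet)
```

Ranking answers by such a score is one cheap baseline against which learned text-mining models could later be compared.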
Video classification using sound features (Dr. Susan McKeever)
There is a huge volume of video uploaded to the web each day, and viewing and classifying it is an ongoing problem. Sound is very informative about the nature and type of a video: sports videos, with their crowd roars, will have their own distinctive pattern. This project is to investigate and test the ability to classify video using its sound. It will use machine learning techniques to develop a classifier that uses sound features of the video, based on analysing and picking out features in the digital sound signal.
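A minimal sketch of the pipeline, using two hand-crafted features (short-term energy and zero-crossing rate) and synthetic signals standing in for real audio; an actual classifier would use richer features such as MFCCs and a trained model.

```python
import math

# Illustrative sketch, not a production pipeline: extract two simple sound
# features and classify a clip by its nearest class centroid.
def features(signal):
    energy = sum(s * s for s in signal) / len(signal)
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / len(signal)
    return (energy, zcr)

def nearest(feat, centroids):
    return min(centroids, key=lambda label: math.dist(feat, centroids[label]))

# Synthetic stand-ins: a noisy "crowd" (rapid sign changes) vs a slow hum.
crowd = [((-1) ** i) * 0.8 for i in range(100)]
hum = [math.sin(2 * math.pi * i / 50) for i in range(100)]
centroids = {"crowd": features(crowd), "hum": features(hum)}

test_clip = [((-1) ** i) * 0.7 for i in range(100)]
assert nearest(features(test_clip), centroids) == "crowd"
```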
Formal Language Parsing (Damian Gordon)
This project would be to implement a parser for the Z Specification language, using the Haskell programming language. The parser should be compliant with the official Z standard as far as possible.
Swarm Intelligence (Damian Gordon)
This project is to investigate and develop a model of swarm intelligence. The basic architecture of a swarm is the simulation of collections of concurrently interacting agents; with this architecture, you can implement a large variety of agent-based models. One interesting application is the modelling of crowd behaviour in emergency situations.
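The basic architecture can be sketched in a few lines. The example below implements only a single cohesion rule pulling each agent toward the swarm centroid; full swarm models such as boids add separation, alignment, and obstacle-avoidance rules.

```python
import random

# Minimal agent-based swarm sketch: 2-D agents, one "cohesion" rule.
random.seed(1)

def centroid(agents):
    n = len(agents)
    return (sum(x for x, _ in agents) / n, sum(y for _, y in agents) / n)

def step(agents, pull=0.1):
    """Move each agent a fraction of the way toward the swarm centroid."""
    cx, cy = centroid(agents)
    return [(x + pull * (cx - x), y + pull * (cy - y)) for x, y in agents]

def spread(agents):
    cx, cy = centroid(agents)
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in agents)

agents = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(50)]
before = spread(agents)
for _ in range(20):
    agents = step(agents)
assert spread(agents) < before  # cohesion contracts the swarm
```

For crowd modelling, each rule (and exits, walls, panic) becomes another term in the per-agent update.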
Gait Analysis (Damian Gordon)
This project will focus particularly on kinematics (measurement of the movement of the body in space), using simple computer vision techniques to identify polynomial splines and create a “stickman” figure which can be overlaid beside the original actor using augmented reality techniques.
Using Puppy Linux to recover files (Damian Gordon)
Puppy Linux is a distribution that provides users with a simple environment, and can be used to recover files (as long as they aren’t NTFS). This project suggests that you extend the functionality of Puppy Linux to recover NTFS files.
Detecting Invisible Motion (Damian Gordon)
This project will look at research from MIT, at the intersection of vision and graphics, which created an algorithm that offers its users a new way of looking at the world. The technique, which uses an algorithm that can amplify both movement and colour, can be used to monitor everything from the breathing of a sleeping infant to the pulse of a hospital patient. Its creators, led by computer scientist William Freeman, call it “Eulerian Video Magnification”: https://www.youtube.com/watch?v=3rWycBEHn3s
Building a visual knowledge base, for unsupervised image recognition (Dr. Susan McKeever)
Knowledge bases are key to being able to store and reuse knowledge. Examples are less structured bases such as Wikipedia, or more structured ones such as DBpedia or ConceptNet. A knowledge base that does not appear to exist yet is one based on visual images. To explain the concept, consider this scenario: if a human were to look at a picture of a street, they would be able to name objects in the image such as car, lamppost, house, and so on. The associated knowledge is that a street “contains” or is associated with cars, houses, etc., and the human uses this type of knowledge to recognise any street image. This project is about building a visual knowledge base from a labelled image set, using object recognition. If many images are passed through this process, useful knowledge can be generated. This knowledge, if captured, could then be used to recognise images in an unsupervised way, based on their content (e.g. an unlabelled image contains cars, a street, etc., so it matches “street”), by matching unseen images through the knowledge base to see if they are recognised. In turn, this could support recognition of unseen images, similar to “zero-shot learning”.
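The core idea can be sketched with plain co-occurrence counts. The scene labels and object lists below are invented examples; a real system would take its object lists from an object-recognition model.

```python
from collections import defaultdict

# Build a "scene contains objects" knowledge base from labelled images,
# then match an unseen image's detected objects to the best-fitting scene.
def build_kb(labelled_images):
    kb = defaultdict(set)
    for scene, objects in labelled_images:
        kb[scene].update(objects)
    return kb

def match_scene(detected_objects, kb):
    overlap = lambda scene: len(kb[scene] & set(detected_objects))
    return max(kb, key=overlap)

training = [
    ("street", {"car", "lamppost", "house"}),
    ("street", {"car", "pavement"}),
    ("beach", {"sand", "sea", "umbrella"}),
]
kb = build_kb(training)

# Unseen image: an object detector (not shown) reports these objects.
assert match_scene({"car", "house", "pavement"}, kb) == "street"
```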
Extracting reusable knowledge from machine learning models (Dr. Susan McKeever)
Trained machine learning models contain knowledge. They have learned the patterns within the data that link to the output task, be it classification or object detection. Take, for example, a machine learning model that predicts fraudulent insurance claims: it trains on instances of insurance claims labelled as “fraud/not fraud”, and the underlying algorithm then defines a link between input features/values and output predictions; i.e. the model “knows” what aspects of an input claim are indicative of fraud or not. The knowledge in this example model is valuable: insurance companies would be very interested to confirm, from their data, which particular features are more indicative of fraud! To discover this knowledge, it should be possible to use feature selection, whereby the most important features of the model can be identified. Likewise, it may be possible to use explanations, whereby each decision of the model can be explained against its input features. This paves the way to gleaning reusable knowledge from machine learning models. This project is about researching the state of the art on extracting reusable knowledge from machine learning models of structured (tabular) data. To make the project more complete, the idea would be to also apply, or even expand on, a technique in some form of proof of concept or experimental work; that depends upon the findings in the state of the art.
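One way to surface such knowledge is permutation importance: shuffle one feature at a time and measure how much accuracy drops; a large drop marks an important feature. The sketch below uses a fabricated toy dataset and a hand-written stand-in for a trained model, purely to illustrate the mechanism.

```python
import random

random.seed(0)

# Toy dataset: feature 0 drives the label, feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

model = lambda row: 1 if row[0] > 0.5 else 0  # stand-in for a trained model

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Accuracy drop when one feature column is shuffled."""
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)
    for row, v in zip(shuffled, column):
        row[feature] = v
    return accuracy(X, y) - accuracy(shuffled, y)

# The driving feature matters; the noise feature does not.
assert permutation_importance(X, y, 0) > permutation_importance(X, y, 1)
```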
Universal Design Code Parser (Damian Gordon)
The Seven Principles of Universal Design, modified by O’Leary and Gordon (2011), are:
1. Equitable Use: the design is useful and marketable to people with diverse abilities.
2. Flexibility in Use: the design accommodates a wide range of individual preferences and abilities.
3. Simple and Intuitive Use: use of the design is easy to understand, regardless of the user’s experience, knowledge, language skills, or current concentration level.
4. Perceptible Information: the design communicates necessary information effectively to the user, regardless of ambient conditions or the user’s sensory abilities.
5. Tolerance for Error: the design minimises hazards and the adverse consequences of accidental or unintended actions.
6. Use of Design Patterns: to make the code easier to understand, and easier to extend, use pre-existing patterns.
7. Consider the User: make the user the centre of the whole process; understand the range of users of the system.
This project seeks to develop a parser that will scan Python code to determine how closely the code adheres to the above principles. NOTE: this is not about assessing the universal design of the user interface produced by the code; it examines the code itself to see how well the code is universally designed.
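A minimal sketch of how such a parser might work, using Python's standard ast module. The mapping from Universal Design principles to code metrics is an open question for the project; here "Simple and Intuitive Use" is approximated, purely as an assumption, by function length and the presence of docstrings.

```python
import ast

def audit(source: str):
    """Report per-function indicators relevant to 'Simple and Intuitive Use'."""
    tree = ast.parse(source)
    report = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            report.append({
                "function": node.name,
                "has_docstring": ast.get_docstring(node) is not None,
                "short_enough": length <= 20,  # arbitrary threshold
            })
    return report

code = '''
def documented():
    """Explains itself."""
    return 1

def mystery(x):
    return x * 31 % 7
'''
result = audit(code)
assert result[0]["has_docstring"] and not result[1]["has_docstring"]
```

Further checks (naming consistency, error handling for Tolerance for Error, use of known design patterns) would each become another visitor over the same syntax tree.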
The Psychogeography Toolkit (Damian Gordon)
Psychogeography is defined as “the study of the precise laws and specific effects of the geographical environment, consciously organized or not, on the emotions and behavior of individuals.” One typical approach is to draw a large circle at random on a map and travel along that circle, commenting on what you see, what you hear, and how it feels (e.g. look at the metal drains: are there dates on them? Look at the shapes of the buildings, and how the telephone wires and electricity wires snake around them, etc.). This project seeks to develop a random (circular) route generator, and to create a means by which the experiences can be recorded, photographed, etc. and automatically formed into a webpage.
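A random circular route generator can be sketched as below, assuming a flat-earth approximation that is adequate at city scale (roughly 111,320 m per degree of latitude, scaled by cos(latitude) for longitude); the Dublin coordinates are just an example centre point.

```python
import math

def circular_route(lat, lon, radius_m, points=12):
    """Return `points` waypoints evenly spaced on a circle around (lat, lon)."""
    route = []
    for i in range(points):
        angle = 2 * math.pi * i / points
        dlat = (radius_m * math.sin(angle)) / 111_320
        dlon = (radius_m * math.cos(angle)) / (111_320 * math.cos(math.radians(lat)))
        route.append((lat + dlat, lon + dlon))
    return route

# A ~500 m circle around a point in Dublin city centre.
route = circular_route(53.3498, -6.2603, 500)
assert len(route) == 12
```

Each waypoint could then be attached to the notes and photographs recorded there and rendered into the generated webpage.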
Shakespeare Apocrypha (Damian Gordon)
The Shakespeare Apocrypha is the name given to a group of plays (e.g. Sir Thomas More, Cardenio, and The Birth of Merlin) that have sometimes been attributed to William Shakespeare, but whose attribution is questionable for various reasons. Using stylometric, statistically based metrics (e.g. Zipf analysis, sentence length, sentence structure, words used, tense, infrequent n-gram occurrences, active vs. passive voice), together with other suitable metrics developed as part of the project, similarities will be measured between the canonical plays and the apocryphal ones.
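Two of the simpler proposed metrics, sentence length and word-frequency profile, can be sketched as follows (the sample text is a placeholder, not an excerpt from the corpora under study):

```python
from collections import Counter
import re

def sentence_lengths(text):
    """Word count per sentence, splitting on terminal punctuation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def top_words(text, n=3):
    """Most frequent word tokens, a crude Zipf-style profile."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w, _ in Counter(words).most_common(n)]

sample = "To be or not to be. That is the question. To die, to sleep."
assert sentence_lengths(sample) == [6, 4, 4]
assert top_words(sample, 1) == ["to"]
```

Comparing the distributions of such metrics between the canonical and apocryphal corpora (rather than single averages) is where the statistical work of the project would lie.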
Building Learning Objects (Damian Gordon)
This project is to investigate and develop a series of Learning Objects (units of educational content delivered via the internet), using either the IMS Content Packaging or the SCORM (Sharable Content Object Reference Model) standard. As well as having the standard learning object parameters, the learning objects for this project will be aware of how learning style can affect the means of presentation.
NewSpeak Text Filter (and Translator) (Damian Gordon)
One of the themes of George Orwell’s 1984 is that the government was simplifying the English language (both vocabulary and grammar) to remove any words or possible constructs which describe the ideas of freedom and rebellion. This new language is called Newspeak and is described as “the only language in the world whose vocabulary gets smaller every year”. In an appendix to the novel, Orwell included an essay about it in which the basic principles of the language are explained. The objective of this project is to develop a text filter that will take in normal text and convert it into Newspeak. An initial system will simply change the words in the text to their equivalents in Newspeak, e.g. “bad”, “poor” and “lame” all become “ungood”; “child”, “children”, “boy” and “girl” become “young citizens”; “quite”, “rather”, “kind of” and “kinda” become “plus”. From there, the next stage is to investigate the more fundamental translation process, whereby the grammar and structure of the text are changed to the style outlined by Orwell.
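The initial word-substitution stage can be sketched directly from the examples above (the dictionary is deliberately tiny, and the later grammar-rewriting stage is not attempted here):

```python
import re

# Substitutions taken from the examples in the project description.
NEWSPEAK = {
    "bad": "ungood", "poor": "ungood", "lame": "ungood",
    "child": "young citizens", "children": "young citizens",
    "boy": "young citizens", "girl": "young citizens",
    "quite": "plus", "rather": "plus", "kind of": "plus", "kinda": "plus",
}

def to_newspeak(text: str) -> str:
    # Longest phrases first so "kind of" is matched before shorter words.
    for phrase in sorted(NEWSPEAK, key=len, reverse=True):
        text = re.sub(rf"\b{re.escape(phrase)}\b", NEWSPEAK[phrase], text,
                      flags=re.IGNORECASE)
    return text

assert to_newspeak("The children were rather bad") == \
    "The young citizens were plus ungood"
```

The second stage would replace this flat dictionary with grammatical rewriting (regular affixation such as "un-" and "plus-", collapsing irregular forms) as Orwell's appendix describes.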
Knowledge Graphs for Machine Learning (Bojan Bozic)
State-of-the-art analysis and report on current knowledge graph implementations and frameworks that could be considered as a preprocessing step to benefit data preparation for machine learning tasks. The project would include the implementation of a use case and validation with an open-source data set. The goal is to show that using a knowledge graph improves prediction accuracy and provides better benchmarks compared to the baseline.
Ontology-driven Fake News Detection (Bojan Bozic)
The idea is to use a knowledge base and model the characteristics of fake news in order to detect them in free text. The approach includes building a knowledge base with one of the existing implementations (e.g. Virtuoso, AllegroGraph, etc.) or building a simpler model from scratch in Prolog. The knowledge base can be used to validate statements and detect inconsistencies, which can be classified as fake news. A dataset will be used to validate the implementation.
Neural Networks and Logic Rules for Semantic Compositionality (Bojan Bozic)
How can we combine neural networks with logic rule-based systems built on Description Logic, and what benefits could be gained by extending NNs with DL? This could be a research report on current attempts to define rules in NNs in order to reduce complexity and improve predictions, or a proof-of-concept implementation showing how to use logic rule restrictions, or a semantic rule-based language such as SHACL or ShEx, in simple NNs.
Decentralised multi-agent deep reinforcement learning (DMARL) (Basel Magableh)
Context uncertainty in distributed self-adaptive systems requires multiple decentralised adaptation agents that can adapt to changes in distributed systems. Learning in a decentralised multi-agent environment is fundamentally more challenging than employing a single agent: DMARL faces serious problems such as non-stationary state, high dimensionality of the observation space, multi-agent credit assignment, robustness, and scalability. This project investigates the possibility of employing DMARL in a self-adaptive microservices architecture. The project will likely require a good understanding of the Ray framework, a scalable distributed reinforcement learning framework (https://ray.readthedocs.io/en/latest/rllib.html), or Intel Coach (https://github.com/NervanaSystems/coach).
Unsupervised realtime anomaly detection towards self-healing microservices architecture (Basel Magableh)
Because of the uniqueness of streaming data found in distributed systems, the design of a self-healing microservices architecture should meet the following requirements:
(1) The system should be able to operate over real-time data (no look-ahead). No data engineering is possible, as the data is collected in real time.
(2) The algorithm must continuously monitor and learn about the behaviour of the microservices cluster.
(3) The algorithm must be implemented with an automatic unsupervised learning technique, so it can continuously learn new behaviour and anomalies in real-time.
(4) The algorithm must be able to adapt to changes in the operating environment and provide an adaptation strategy that can be orchestrated over the cluster nodes.
(5) The algorithm should be able to detect anomalies as early as possible, before the anomalous behaviour interrupts the functionality of the running services in the cluster.
(6) The proposed model should minimise the false positive (false alarm) rate and the false negative rate. If the system identifies normal behaviour as an attack, this should be classified as a false positive (false alarm).
(7) The proposed model should offer a high detection rate, better accuracy and a lower false alarm rate.
(8) The proposed model should offer a consistent adaptation strategy, preserve the cluster state, and offer the architecture a roll-back (auto-recovery) strategy in case the adaptation action fails.
(9) One important aspect of a self-healing microservices architecture is the ability to i) continuously monitor the operational environment, ii) detect and observe anomalous behaviour, and iii) provide a reasonable policy for self-scaling, self-healing, and self-tuning the computational resources to adapt to sudden changes in its operational environment dynamically at run-time.
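As a minimal illustration of requirements (2)-(4), the sketch below learns a metric's running mean and variance online (Welford's algorithm) and flags values far outside the learned range, with no look-ahead; the threshold, warm-up length, and sample metric values are all arbitrary choices, and a real system would use richer streaming models.

```python
import math

class StreamingDetector:
    """Online z-score anomaly detector using Welford's running statistics."""

    def __init__(self, threshold=4.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def observe(self, x):
        """Return True if x is anomalous, then update the running stats."""
        anomalous = False
        if self.n >= 10:  # warm-up before judging
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

detector = StreamingDetector()
# Steady CPU-load metric with one spike injected.
flags = [detector.observe(x) for x in [50, 51, 49, 50, 52, 48, 50, 51, 49, 50,
                                       50, 51, 95, 50]]
assert flags[12] is True and sum(flags) == 1
```

Because the statistics keep updating, the detector also satisfies the spirit of requirement (3): new stable behaviour is gradually absorbed as normal.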
Deep Spiking Neural Networks (SNN) (Basel Magableh)
Spiking Neural Networks (SNNs) are a rapidly emerging research area of data analytics. SNNs are inspired by the brain’s process of sequential memory. SNNs might be able to handle complex temporal or spatial data in dynamic environments, at low power and with high effectiveness and noise tolerance. The success of deep learning comes at the cost of using brute-force algorithms and power-hungry GPUs, in addition to the issues of slow model training and the limitation of each model to a specific domain of MDP environments. SNNs could benefit from the advances made in evolution and cognitive neuroscience to be employed in the domain of IoT and multi-sensor networks. This project aims to investigate the possibility of implementing an SNN in a simulated IoT platform such as CupCarbon (http://www.cupcarbon.com).
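The basic spiking unit can be sketched as a leaky integrate-and-fire (LIF) neuron; all parameters below are illustrative, not tuned to any biological data or to CupCarbon.

```python
def lif_run(currents, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: returns a spike train (bools)."""
    v, spikes = 0.0, []
    for i in currents:
        v = leak * v + i          # membrane potential leaks and integrates
        if v >= threshold:
            spikes.append(True)
            v = 0.0               # reset after a spike
        else:
            spikes.append(False)
    return spikes

# Weak constant input never fires; a strong burst does.
assert not any(lif_run([0.05] * 50))
assert any(lif_run([0.05] * 10 + [0.6, 0.6, 0.6] + [0.05] * 10))
```

A network of such units, with weighted spike propagation between them, is the building block an SNN simulation over IoT sensor streams would scale up.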
Activity-awareness in mobile computing (Basel Magableh)
Activity-awareness in mobile computing has inspired novel approaches to creating smart personalised services/functionalities on mobile devices. The ultimate goal of such approaches is to enrich the user’s experience and enhance software functionality. One of the major challenges in integrating mobile operating systems with activity-aware computing is the difficulty of capturing and analysing user-generated content from personal handsets/mobile devices without compromising user privacy and the security of the collected sensitive data. Although conventional solutions exist for collecting and extracting textual content generated by users in mobile computing applications, these solutions are unsatisfactory when it comes to the personal integrity of the user. All previously known conventional solutions comprise collecting the user’s generated content from various applications, such as an email client and/or Short Message Service (SMS). Unfortunately, all of those applications are introduced to the user after exposing and sharing his/her personal data with web services located outside the mobile device, e.g. in the cloud. In addition, the collected information is stored outside the user’s personal mobile device on some remote server.
These serious drawbacks make many users reluctant to use the described conventional solutions. However, there is still demand for personalised, proactive service functionalities. Activity-aware computing enables mobile software to respond proactively and effectively to user needs based on contextual information found in the environment in which it operates. The ultimate goal of activity-aware computing is to automatically extend the application’s behaviour/structure based on the activity being performed by the user or software components. In this project, we investigate a model for collecting user-generated content from the mobile OS’s messaging loop and feeding the collected context information into an experience matrix based on the sparse distributed model. The model offers the device a runtime representation of the current context model, which can be used to predict the user’s activity.
Toolkit to Support Undergraduate Co-Design Team Projects (John Gilligan)
Co-design has its roots in the participatory design techniques developed in Scandinavia in the 1970s. Co-design reflects a fundamental change in the traditional designer-client relationship: a key tenet of co-design is that users, as ‘experts’ of their own experience, become central to the design process. Co-design is a multifaceted process with multiple stakeholders. It requires support for training participants in co-design methods, support for project management across the project lifecycle, and support for managing the deliverables of these projects, for example code sharing. This project addresses the development of a co-design toolkit to provide these supports across the processes of co-design. What are the useful components of this toolkit? Can their effectiveness in assisting the development process be measured? Do they help meet the learning outcomes of team projects on undergraduate Computer Science courses?
Designing an effective formative assessment program for teaching AGILE Software development (John Gilligan)
Teaching Agile software development faces many challenges, especially in the development of appropriate exercises and activities around different aspects of the methodology. For example, what is the best way to introduce the different roles involved, such as Scrum Master, Scrum Coach and Product Developer? How can user feedback be embedded in the process? This project looks at the design of a suite of supporting exercises, built around the iterations of a specific app development, to realise the learning outcomes of a course on Agile development.
Developing an effective Audit Methodology for testing Web Accessibility (John Gilligan)
The EU Web Accessibility Directive of 2016 requires public bodies of member states to ensure their websites and apps are accessible to persons with disabilities. All websites created after that date have to be accessible by 23 September 2019; existing websites have to comply by 23 September 2020; and all mobile applications have to be accessible by 23 June 2021. The W3C’s Web Content Accessibility Guidelines have also recently been upgraded to version 2.1, with new checkpoints related to cognitive challenges, amongst others. This project looks at developing effective auditing strategies to ensure compliance with these guidelines, using both automatic tools and manual processes.
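One automatic check, WCAG success criterion 1.1.1 (images need text alternatives), can be sketched with the standard-library HTML parser; a full audit tool would cover many more checkpoints and still require manual review.

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Count <img> tags that lack an alt attribute entirely."""

    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

page = '<img src="logo.png" alt="Company logo"><img src="decor.png">'
auditor = AltTextAuditor()
auditor.feed(page)
assert auditor.violations == 1
```

Note the check only flags a missing alt attribute: an empty alt="" is deliberately allowed, since it is valid markup for purely decorative images, which is exactly the kind of nuance that keeps manual review in the audit loop.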
Establishing the inclusiveness and fairness of Big Data Sets for machine learning and other applications (John Gilligan)
The issue of applications based on Big Data excluding those on the edges of that data has become a rising topic. For example, CV-screening programs which use machine learning can discriminate against those with disabilities, or on grounds of gender, because the data used does not sufficiently represent those populations. August bodies such as the World Economic Forum have highlighted this, and IBM’s AI Fairness 360 toolkit is an example of an initiative which looks at this problem. This project examines ways in which populations are excluded, for example through outlier removal in ML pre-processing. It looks to develop metrics for inclusion, similar to those which have been established for data fairness, and to develop ways to calculate these measures. It also looks to develop algorithms for greater inclusion in these applications.
Augmented Reality Tourist Information System (John Gilligan)
Is augmented reality a viable technology for building usable tourist information systems? Can it be used effectively to combine geolocation and multi-modal content presentation to enhance the tourist experience?
Project ideas by lecturers
Projects by Brendan Tierney can be found here. These projects are suitable for Data Science/Analytics and Software Development stream students. Some examples include: GPT-4 in the Classroom, tech for assisting with visual accessibility, machine learning, artificial intelligence, evaluation of different programming frameworks, DevOps, Linux, Cloud infrastructures and many others.
Projects by Damian Gordon can be found here http://damiansprojectideas.blogspot.com/search/label/MSc
Proposal INFO and Submission
Proposal submission requirement (admission)
Students can submit their proposal and, if accepted, proceed to the dissertation once they have successfully completed all core modules* (that is, 8 modules). They can take the dissertation module alongside their option modules. While it is not recommended to take two option modules at the same time as the dissertation, students are permitted to do so.
Note: if the dissertation proposal does not reach a viable standard and gain approval, the student can exit with a Postgraduate Diploma in Computing, or can resubmit a new dissertation proposal within two semesters of completing all the modules.
(*) Students who started from the TU256 programme and are following the 1-year path to complete the MSc might be allowed to proceed to the dissertation with one core module pending (this cannot be Research Design, though). Please contact the coordinator before submitting your proposal to confirm approval to submit.
Proposal Submission procedure
The proposal needs to be submitted as a pdf file linked in this form. The filename of the pdf MUST be: StudentNumber_name_surname_MSc_Proposal.pdf.
Note: proposals that do not respect the template and the maximum number of words for each section will not be taken into consideration for evaluation. All the required information is compulsory.
Proposal Submission Outcome
Approved (student can proceed with dissertation)
Subject to change (student needs to revise proposal and instructions will be communicated by email on how to do so)
Not Ready (student cannot proceed and is allowed to re-submit a new proposal at the next deadline)
What is NOT a proposal
If your project fits into one of the following scenarios, your dissertation proposal will NOT be approved
A design of an application
An implementation of an application or piece of software
A literature review
A literature review with some conclusions
A literature review with some proposed “something new”
A literature review with a survey
Proposal Template
Latex template
You can use the same latex template employed in the Research design and proposal writing module. Download it here.
Detailed formatting
Front-page
Student name and surname
Student number
Stream (eg. ASD, DA)
TD Dublin programme (eg. TU059 / TU060)
Number of modules left to take (or exemptions or modules pending results)
When to start the dissertation if proposal is approved (select a date from the deadline list)
Proposal submission attempt (eg. 1st, 2nd, 3rd)
Title of project (20 words)
List of people and companies associated with the project (including those you might need input from or data from) (20 words)
Sources of data needed for the project, if any (20 words)
The proposal
The research background + domain + scope (max 300 words).
Describe the context of application and provide the reader with some background and the main notions/concepts.
Informal description of the research problem (max 300 words)
Describe the problem you aim to tackle informally.
Literature review + state-of-the-art approaches to solve the identified research problem + gaps (max 1000 words)
Identify and describe the relevant peer-reviewed articles you have read (15+) in the selected domain of research. Identify and describe the state-of-the-art approaches to solve the identified research problem.
Identify and describe the gaps in the state-of-the-art approaches (these will inform the subsequent research question).
Research question (max 70 words)
Hypothesis (max 300 words)
Informally describe the hypothesis of your research design. Formally define your alternate and null hypotheses.
Research objectives and experimental activities (max 1000 words)
Define your objectives and the research activities for each of them. Provide precise details about how you are going to implement, in practice, each of the research activities (eg. programming languages, technologies employed, execution of surveys, baseline methods and approaches).
In some of the research activities, specify clearly all the details of the dataset (eg. dependent/independent variables, scales and ranges, sample size).
Evaluation of designed solution with statistical tests (max 300 words)
Describe carefully how you are going to evaluate the outcomes of your experiment statistically, considering the concept of significance, and how you are going to accept/reject your hypothesis
Describe how the findings will be related to the research question.
Bibliography
References – using the APA7 style (max 1500 words)
Activities
Timeframe for each research activity planned in section 7 (1 line per activity) (max 300 words)
Proposal Submission Checklist
This checklist should be used by students who are submitting dissertation proposals to the Dissertation Coordinator. It will assist students in completing the research proposal.
Do you have a clear Project Title?
Do you have a clearly defined research question?
Do you know what you are going to do?
Do you know the scope of your proposed work?
Do you know how you are going to test it?
Have you structured your research question correctly?
Have you completed the reading on background topics using reputable sources?
Have you identified 15+ key references relating to your dissertation?
Have you cited all references using the APA style method?
Have you identified any previous related experiments, and do you have their results (and data sets)?
Do you have a clearly defined experiment/evaluation that supports your research question?
Can you complete the experiment/evaluation within the necessary timelines?
Can you measure the outcomes from your work?
Will you be able to evaluate the outcomes of your work and compare with results from previous research?
Do you have all the resources in place to conduct the research (human, technology, data, etc)?
Do you have written agreement from all the people/companies you need to have involved in the project?
Have the people/companies agreed to be available (provide the necessary support) during the lifetime of the project?
Do you have a realistic project plan?
Can the project be completed within the required timeline?
Is the MSc Dissertation Proposal Template completed in detail?
Can you explain your project to someone who is not familiar with your programme of study, in a way that they can understand what your project is about, why you are doing it, etc?
Are you ready to present and defend your project proposal?
First Meeting with Your supervisor
The following check list is to assist the MSc Dissertation student and their supervisor to prepare for their first meeting.
Student
Email your Supervisor your proposal
Create a detailed Table of Contents, listing the sections for each chapter.
Email this to your supervisor
Complete the first draft of your Chapter 1. This will be based on your project proposal.
Keep working on your project until you have your first meeting with your supervisor
Supervisor
Review the material forwarded by the student
Familiarise themselves with the requirements of a dissertation project
Review previous dissertations, that relate to the student’s dissertation
Topics to Discuss & Agree at First Meeting
Discuss the project
Discuss the progress since the start of the project
Discuss possible issues with the project, with the experiment, evaluation, etc
Discuss the use of a meeting log, which the student should complete in advance of each meeting.
Agree on the frequency and schedule of the project update meetings
Agree a communication plan
Responsibilities
Note that it is your responsibility to submit a coherent manuscript. Although your supervisor might be happy with your work, it is the final manuscript that will be examined by other lecturers, so you alone are responsible for the final mark. Try to maximise the individual marks as outlined by the second criterion of the marking scheme.
Marking Scheme
First order criterion
Distinction 80+%
The practical work is of such a high professional standard that it could be distributed without significant extra effort.
The research makes a significant contribution to the chosen field and is worthy of publication.
Merit (first): 70-79%
Extremely strong internal consistency making the project a convincing whole which addresses the original research question. Evidence of originality. Impressive use of information gathered to support argument. Critical awareness of strengths and limitations
Good (2.1): 60-69%
Evidence of internal consistency which relates to original question. Very good use of information gathered to support argument. Awareness of strengths and limitations
Fair (2.2): 50-59%
Evidence of internal consistency which relates to original question but with some weaknesses in the integration of different sections. Use of information gathered but with some weaknesses in the integration of evidence. Some awareness of strengths and weaknesses
Pass (third): 40-49%
Limited evidence of internal consistency which relates to the original question, with significant weaknesses in the integration of different sections. Limited use of information gathered to sustain the argument, with significant weaknesses in the integration of evidence. Limited discussion of strengths and weaknesses.
Fail: 0-39%
Lack of internal consistency. Very limited use of information gathered to sustain the argument, with serious weaknesses in the integration of evidence. No awareness of the limitations of the dissertation.
Second order criterion
Composition, organisation and expression; Use of language; Referencing (10%)
[1-2] Poorly organised, very unclear language with serious errors and poorly referenced.
[3-4] Adequately organised and expressed. Clear use of language but with significant errors/typos. Fair referencing but with some inconsistency.
[5-6] Generally well organised and expressed with clear use of language but with minor errors/typos. Competent referencing but with some inconsistency.
[7-8] Well organised and expressed, easy to follow (eg. through visual aids) and clear use of language. Very good referencing.
[9-10] Optimally organised and expressed. Excellent use of language with no errors at all. Fully and appropriately referenced. The dissertation can be easily turned into a publication.
Introduction and rationale; Formulation of research question/problem; Focus (10%)
[1-2] Incoherently formulated research question/problem. Inadequate rationale and no focus.
[3-4] Poorly formulated research question/problem. Lacks subject focus. Rationale poorly articulated and justified.
[5-6] Sufficiently formulated research question/problem with some evidence of subject focus. Sufficient rationale is provided.
[7-8] Competently formulated research question/problem, evidence of subject based focus and clear and well thought through rationale.
[9-10] Optimally formulated research question/problem with clear subject based focus and excellent, convincing rationale.
Literature review; Range of reading; Relation to research question; Independent research (20%)
[1-4] Over reliance on very restricted range of sources. Not related directly to research question/problem. Very little evidence of independent research for sources.
[5-8] Reliance on limited sources, lack of evaluation. Poorly related to research question/problem. Little evidence of independent research for sources.
[9-12] Appropriate reading with some limited evaluation. Not consistently clearly related to the research question. Some evidence of independent research for sources.
[13-16] Wide reading with critical evaluation and identification of gaps/issues in the literature. Clearly related to the research question/problem. Good evidence of independent research for sources.
[17-20] Extensive reading which has been thoroughly critically evaluated. Optimal understanding of the literature with excellent identification of gaps/issues and explicitly related to the research question. Very good evidence of independent research for sources. The literature review is worthy of a publication.
Design of solution; Critical awareness, analysis, use and evaluation of relevant theory; Rationale for research solution/approach; Information gathering and analysis; Awareness of strengths and limitations. (15%)
[1-3] Poorly designed solution with little awareness of theory. Ability to analyse, evaluate and apply relevant theory is non-existent. Inappropriate or non-existent rationale presented for the research approach and the data collection methods used. Poor and inappropriate information gathering and analysis, not capable of being reworked. No awareness of strengths and limitations of proposed solution/approach.
[4-6] Weakly designed solution with some limited awareness of theory. Little evidence of ability to analyse, evaluate and apply relevant theory. Defensible rationale presented for research approach adopted and the data collection method used. Weak information gathering and analysis but sufficient information gathered to allow for a possible reworking of data. Little awareness of strengths and limitations of solution/approach taken.
[7-9] Generally clear awareness of theory. Good evidence of ability to analyse, evaluate and apply relevant theory. Fair rationale for research approach adopted and the data collection methods used. Competent information gathering and analysis. Some awareness of the strengths and limitations of the solution/approach taken.
[10-12] Clear and critical awareness of theory. Very good evidence of ability to analyse, evaluate and apply relevant theory. Clearly presented rationale for research approach adopted and the data collection methods used. Very competent and appropriate information gathering and analysis. Clear awareness of strengths and limitations of solution/approach taken.
[13-15] Extensive and critical awareness of theory. Convincing evidence of ability to analyse, evaluate and apply theory. Excellent rationale for research approach adopted and the data collection methods used. Extremely systematic and appropriate information gathering and analysis. Critical awareness of the strengths and limitations of the solution/approach taken. The design of the solution is innovative and worthy of publication.
Analysis & evaluation of deliverables; Awareness of strengths and limitation of findings (20%)
[1-4] Results are very limited with no discussion or critical analysis. Evaluation of deliverables is absent or unclear and incoherent. No awareness of strengths and limitations of findings.
[5-8] Results are provided but with a poor structure. Deliverables have been poorly and superficially evaluated, with no awareness of strengths and limitations of findings.
[9-12] Results are presented and structured with an appropriate analysis. Deliverables are clear and sufficient. Strengths and limitations of findings are defined and sufficiently discussed.
[13-16] Results are clearly presented and structured, followed by an excellent analysis. Strengths and limitations of findings are defined and critically discussed.
[17-20] Results are optimally structured and articulated. Deliverables are very clearly and critically evaluated. Strengths and limitations of findings are also discussed convincingly, making the analysis and evaluation worthy of a publication.
Conclusion; Contribution and Impact; Future work and recommendations (10%)
[1-2] Conclusion is merely a summary of thesis. Little or no commentary on the impact or limitations of the findings. Future work and recommendations are absent.
[3-4] Conclusion has a sufficient summary of the dissertation. There is some appreciation of impact, significance and/or limitations, but it is weak and difficult to grasp. Research question is addressed but not fully answered. Future work is poorly defined with little or no recommendations.
[5-6] Conclusion is fair with a good overview and summary of the thesis. Some synthesis and impact of findings but not fully convincing. Research question is addressed and satisfactorily answered. Future work and recommendations have been identified but not fully convincing.
[7-8] Conclusion is very good, with a good synthesis of the work and with a clear discussion of impact. Research question is addressed and satisfactorily answered. Future work and recommendations have been identified and are fair.
[9-10] Conclusion is excellent and optimal, providing a clear synthesis of the work and a fully convincing discussion of impact. Research question is addressed and satisfactorily answered. Future work and recommendations are also extremely convincing, well structured and defined, clearly highlighting how the project can be extended and enhanced.
Complexity, Originality, significance, applicability, dissemination (5%)
[0-1] – not complex, little or no originality or significance, and no demonstrated application.
[2] – some complexity, originality, and/or significance, some demonstrated application, if applicable.
[3] – commendable level of complexity, originality, and/or significance, and a reasonable degree of demonstrated application, if applicable.
[4] – very complex, original and/or significant, with good and appropriate demonstrated application, if applicable.
[5] – exceptional complexity, originality, and/or significance, and outstanding demonstrated application, if applicable.
Verbal presentation and defense (10%)
[0-2] Poor verbal presentation with no discourse and no critical thinking. Not able to defend any raised point.
[3-4] Weak verbal presentation with some discourse and no critical thinking. Not always able to defend raised points.
[5-6] Sufficient verbal presentation with some discourse and some critical thinking. Sufficiently able to defend raised points.
[7-8] Good verbal presentation with fair discourse and fair critical thinking. Almost always able to defend any raised point.
[9-10] Excellent verbal presentation with deep discourse and deep critical thinking. Well able to defend any raised point.
Dissertation structure, templates, and checklists
The completed dissertation will be a substantial piece of written work in the region of 60-70 pages (approx. 20,000 words) of core chapters, with a maximum page limit of 120 pages overall. It is important to note that the length of your dissertation will depend on the topic and material that you are including; remember, quality is far more important than quantity.
Templates (word and latex)
Overleaf template (recommended)
Structure
Initial structure
Declaration (page numbering starts here using Roman numerals)
Abstract (Max 750 words)
Acknowledgements
Table of Contents
Table of Figures
Table of Tables (Roman numeral page numbering ends here)
Chapter 1 – Introduction
This should be a short account of why you undertook the investigation, what the general state of knowledge was at the time you started, and why you asked the questions that your research/observations were expected to answer. It should state your research question and briefly introduce the research undertaken. A brief reader’s guide to the dissertation should be included.
1.1 Background
1.2 Research Project/problem (define the research question here)
1.3 Research Objectives
1.4 Research Methodologies
1.5 Scope and Limitations
1.6 Document Outline
Hints
Introduce the context of your research work.
Introduce and describe the problem being addressed (problem statement).
Describe the importance of the problem and its relevance to the underlying field of study.
Briefly describe the novel proposed solution to the problem and the purpose of the study.
State the research question.
Outline the structure of the dissertation.
Write it in the present tense.
Checklist
Include citations from high quality, credible and relevant academic sources.
Give a clear description of the study’s rationale.
Establish the significance of the current work.
Scope and objectives clearly stated.
Is there a logical flow to the subject covered, its importance and its relevance?
Chapter 2 – Literature review and related work
It is essential that this should be a critical review in which the various papers are compared, in which you express your own opinion of the conclusions that may be drawn, and in which you do your best to reconcile discrepant results in favour of one or other set. Provide a summary at the end of the sections or of the whole review. Remember that the content of this chapter must be relevant to the actual research carried out; it is not a “brain dump” of everything you have read. You must demonstrate analysis and synthesis of the literature. In some cases it may be necessary to divide the state of the art into two separate chapters: one covering the application domain and the other the technologies, or one describing the background/context of your research and one on the state of the art for your specific issue.
Hints
Start with a few sentences describing the general domain.
Present a preview of research areas particularly relevant to your problem and that you will discuss in depth.
Create a body with different paragraphs, each discussing a different relevant thread of research.
A thread of research devoted to the synthesis of works using a different method to solve the same problem.
A thread of research devoted to the synthesis of works that use the same proposed method to solve a different problem.
A thread of research devoted to the synthesis of related problems that cover the domain of your problem.
A thread of research devoted to the synthesis of a similar method applied to solve a similar problem.
End the section with a paragraph that summarises the reviewed work, emphasises the gaps that justify the research problem, and establishes the need for your solution.
Write it in the past tense.
Checklist
Does the dissertation define the existing knowledge gap?
Is the research question in line with the literature gap?
Does it have a discussion section, summarising and commenting on the pros and cons of the reviewed papers?
Chapter 3 – Design and methodology
The general structure of the study should be described clearly. The comparisons that are going to be made, the controls, technical details, etc. should be included if appropriate. This chapter might report on the design of a software-based solution or an experiment using existing datasets. The chapter also includes the methodology/ies adopted for designing the solution and for evaluating it (eg. errors, performance measures, accuracy, ROC curves, t-tests, correlations etc.). This chapter should include your ethical considerations. You should try to demonstrate as much as possible your commitment to conducting research in an ethical and responsible manner. If no critical issues are identified, this should also be highlighted in this section. Please refer to the Ethical Considerations section for more information.
Hints
Carefully describe how the research was conducted.
Provide readers with technical background information or explanation needed to understand the results.
Explicitly state the assumptions of your research.
Present the research hypothesis/es.
Include enough details to allow replicability and verifiability.
Ideally, include a visual diagram (non-textual elements, figures, charts, photos, maps, tables) summarising the components of your design and how they interact with each other.
Describe the materials, tools and equipment used in the research.
Explain how the samples have been gathered.
Describe any randomization techniques as well as how the samples were prepared.
Explain how the measurements were made and what computations have been executed upon the raw data.
Describe the statistical techniques used upon the data.
Describe evaluation metrics and rationale for their adoption.
Write it in the past tense.
Checklist
Provide a full and clear description of the chosen study design.
Clear and well-structured description of data collection methods and/or datasets used.
There is enough information for the work to be reproduced by an independent researcher.
Visual diagram summarising the components of your design and how they interact with each other.
Chapter 4 – Results, evaluation and discussion
Depending on the nature of the project, this chapter will describe the actual work carried out e.g. any experiment undertaken or system implementation following the theoretical description of the design of chapter 3 (design). This is the most important section of your dissertation. Also it should focus on the presentation and discussion of the findings in the light of what is already supposed to be known, from the literature conducted in chapter 2. It should show how findings confirm or refute the research hypothesis and how they differ with previous work in the literature. Do not use this section for another review of the literature.
Hints
Restate the research problem to refocus the reader.
The results of a study do not prove anything, they can only confirm or reject the research hypothesis/es.
Explain what your results mean and mention how they relate to your research question.
Avoid presenting and discussing data that is not critical to answering the research question.
Present a result and then explain it, before presenting the next result.
Be as concise and factual as possible in reporting findings.
Do not present the same data or repeat the same information more than once.
Do not ignore negative results; rather, discuss them thoroughly.
End with a synopsis of results followed by an explanation of key findings.
Explain the meaning of the findings and their importance to the research field.
Relate findings to similar studies.
Consider alternative explanations of the findings.
Acknowledge the limitation of the study and the findings.
Results are written in the past tense, while the discussion should be in the present tense.
Checklist
Results are credible and supported by valid and intelligible statistics.
Limitations are clearly stated.
Findings of the study, despite its limitations, are clearly stated.
Provide a comprehensive and well-supported criticism of the impact and relevance of the study findings.
The section combines text, tables and figures to present data and highlight major findings.
Chapter 5. Conclusion
This should be a short account of the results of your work, emphasising mainly what is new. There should be a close correlation between this chapter and chapter 1, in which you described the problem you were addressing. It is advisable to deal with the limitations of your research at this stage and to suggest here what further work might be done. This is the appropriate place to do a self-assessment of your research.
5.1 Research Overview
5.2 Problem Definition
5.3 Design/Experimentation, Evaluation & Results
5.4 Contributions and impact
5.5 Future Work & recommendations
Hints
Summarize the literature review in a couple of sentences discussing the gaps and limitations.
Restate the main research problem in one sentence backed up by gaps in the literature.
Summarise in a couple of sentences the solution designed to the research problem.
Highlight key findings, their impact and significance to the body of knowledge.
Describe important/unexpected implications applied to practice.
Introduce possible new future work or expanded ways of thinking about the research problem.
Don't introduce any new ideas.
Write it in the present tense.
Checklist
Summarize the dissertation's main arguments and conclusions.
Give a final judgment on the significance of the study findings.
Include future work.
References (using [APA6 Referencing style])
References should be consistently cited in the text. The references in the Reference List at the back of the dissertation should be listed in alphabetical order. They should also be complete, so that a reader wanting to locate a particular reference has all the information necessary to do so (including page numbers, volume and issue).
Appendixes
These should contain supplementary material that is not necessary in order for the reader to follow the argument. For example, the text of a questionnaire, detailed UML diagrams, or a complete Software Requirement Specification should be placed in an Appendix. It is not considered necessary to include code, but you may do so by including a link within your dissertation PDF.
Reminders
In the introduction of each chapter, cover the purpose of the chapter and give an overview of what it covers (except for the introduction chapter)
At the end of each chapter, summarise the main gist of the chapter itself and link it to the next chapter
Have a logical presentation of research, developments and findings, and lead the reader in an integrated way to what you want to achieve
You can only claim to have an opinion if it is based on original research or an in-depth analysis, for instance, has been made of theory where different authors’ viewpoints are contrasted (commonalities and differences), and you make deductions based on that.
Formatting style (compulsory)
Make sure all tables and figures are numbered and centred, with a caption, and that they are numbered consecutively within a chapter, e.g. Figure 1.2, Table 1.2.
Style for chapter headings – Times New Roman 14, Upper Case, Bold, Numbered.
Style for chapter sub-headings – Times New Roman 13, Mixed Case, Bold, Numbered.
Style for sub-sub headings within a chapter – Times New Roman 12, Mixed Case, Numbered.
Paper Size: A4 Paper.
Font size for text: Times New Roman Size 12.
Line Spacing: 1.5.
All text has to be left and right justified.
Left Margin should be 3.2cm, Right Margin should be 3cm, Top & Bottom Margins should be 2.5cm.
Page numbering should be centred.
APA6 Referencing to be used [APA6 Referencing style].
References to websites should be included as footnotes and not included in the References/Bibliography (they are not scientific contributions).
Tables and figures cannot span two pages.
Chapters start on a new page.
Each figure and table has to have a caption.
Pages cannot be left half empty because a large figure or table cannot be placed on them. Place the large figure or table on a new page and refer to it with a label within the text (e.g. “Figure x.y on page z depicts…”).
Each figure and table must be within the left and right margins and must be fully readable.
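For those working in the recommended Overleaf/LaTeX template, the formatting rules above can be sketched in the preamble roughly as follows. This is an illustrative sketch only: the official template already configures most of this, and the package choices and the file name `architecture.png` are assumptions, not part of the official setup.

```latex
% Illustrative preamble sketch; the official Overleaf template takes precedence.
\documentclass[12pt, a4paper]{report}   % A4 paper, size 12 body text
\usepackage{times}                      % Times New Roman-like font
\usepackage[left=3.2cm, right=3cm, top=2.5cm, bottom=2.5cm]{geometry}
\usepackage{setspace}
\onehalfspacing                         % 1.5 line spacing
\usepackage{graphicx}

\begin{document}
\chapter{Introduction}                  % chapters start on a new page
% The report class numbers figures per chapter (Figure 1.1, 1.2, ...).
% A large figure placed on its own page, with a caption and a label:
\begin{figure}[p]
  \centering
  \includegraphics[width=\textwidth]{architecture.png} % hypothetical file
  \caption{Overview of the proposed system architecture.}
  \label{fig:architecture}
\end{figure}
Figure~\ref{fig:architecture} on page~\pageref{fig:architecture}
depicts the design.
\end{document}
```

The `[p]` placement keeps a page-sized figure on a float page instead of leaving the preceding page half empty, matching the rule above.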
Preparing the dissertation
Make sure that you correctly number all pages of the dissertation.
Make sure that the table of contents and other tables reflect the new and updated page numbers.
Make sure all tables and diagrams are correctly labelled (and referenced if need be).
Where appropriate, get a native English speaker to proofread your dissertation and identify all English spelling and grammar mistakes.
Make sure you correct all spelling and grammar mistakes in your dissertation.
Make sure that your references are complete and full details given using the APA referencing style.
Avoid having any blank spaces on pages where you have a diagram on the next page.
Discuss the final version of your dissertation with your supervisor.
Complete all changes that your supervisor has recommended.
Sign the declaration page, to state that your dissertation is your own work.
Ensure that you comply with the plagiarism policy.
After you have completed the first draft
Give yourself a day or two before rereading and revising your text. This is the only way to catch mistakes after working too long on the same piece of work.
Don't use "I" statements or make sweeping generalizations. Stay objective, and be specific.
Does each body paragraph begin with a topic sentence that introduces your reason for that paragraph?
Did every body paragraph include a concluding statement? If it is followed by another paragraph, the concluding sentence should function as a hook and transition.
Avoid unnecessary jargon and define terms where needed.
Use appropriate transitions to show the connections between your ideas.
Bad transition: We have discussed the topic of code-switching in Swahili from what might be termed the point of view of the mechanics of code-switching, i.e. how it operates in Swahili; let us now examine its function in Swahili linguistic culture, i.e. why Swahili speakers choose to code-switch, and when. We will then show how code-switching is used in popular media, print advertising, and other genres.
Good transition: No discussion of print-medium advertising in Swahili would be complete without a discussion of another, related phenomenon, which is the use of language(s) and varieties in comic books. Particularly instructive are Swahili renditions of Tarzan comics, which depict Tarzan as fluent in Swahili and English, while other characters are depicted only as speaking English inadequately.
Text is logically organized using paragraphs.
Consistently use UK English (avoid mix of US and UK English).
Make sure each section is using the correct tense.
Take care with colons, semicolons, full stops, commas and punctuation in general.
Avoid colloquialisms and contractions.
Remove repetitions within sentences and across sentences and use synonyms.
Make sure you have defined an acronym before using it.
Make sure technical terms are explained and do not assume readers know everything.
Split long sentences into shorter ones; organise sentences into paragraphs and avoid orphan sentences.
Split long paragraphs into shorter ones.
Check each bibliographic entry (against incomplete authors, page numbers etc.).
Add visual diagrams/schemas where possible to clarify textual information.
Make the caption of each table and figure self-explanatory (ideally a reader should understand the content of each table and figure without looking back at the text).
Connect each sentence to the previous one as a flow.
Be precise with terms such as approach, technique, method, methodology, framework, measure, measurement, model: they all mean different things and can have different meanings depending on the context.
Choose a tense and stick to it (do not mix present and past tenses). Do not start a sentence with a citation (e.g. “[40] said…”).
Use a mix of active and passive sentences.
Use proofreading tools like Grammarly, Google Docs, or Microsoft Word to eliminate typos and grammatical mistakes. Excessive errors may lead to rejection. Grammarly can be integrated with Overleaf.
Utilise figures to convey complex information visually and prioritise readability; make sure the text is easy to read. Follow the adage “make them big enough so they can be read easily”. If you are displaying images together, use the same scale for the X and Y axes. If you are trying to emphasise a small part of an image, consider cropping the image to that part; there is no point in using more space than needed. Also, make sure to include both axes in figures. Use colours to show the important parts you want to emphasise.
For values between 0 and 1, include up to 4 decimal digits (e.g. 0.8767); for a scale from 1 to 100, include 2 decimal digits (e.g. 98.87%).
In LaTeX, single quotation marks are written like this: ` and ' ; double quotation marks like this: `` and '' .
Refrain from hyperbolic language, such as claims of being the sole dissertation addressing an issue or the best in its field.
Use bullet points rather than listing items one after the other in sentences or paragraphs; they are a very useful resource for describing different points in your dissertation (but be careful not to overuse them). You can use them to show your contributions, the results of the experiments, the goals of the research, etc.
Do not leave empty space between a section heading and its subsections; include a short description of what the reader will find in the subsections that follow.
Employ large language models like ChatGPT or Gemini for refining text, identifying typos, enhancing style, and rectifying awkward phrasing. However, avoid direct copying to prevent plagiarism, and avoid language that is too flowery or verbose.
Avoid complicated or redundant language. The simpler the better. Complicating things makes texts wordy and harder to read. For example, instead of “in order to” it is best to use “to”. Or use "can" rather than "are able to". Better "In 2015" than "In the year 2015". Better "In computer science" than "In the area of computer science".
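For writers using the LaTeX template, the quotation-mark advice above looks like this in source form (a small illustrative snippet; the example sentence is invented):

```latex
% Correct LaTeX quotation marks. Straight " marks do not render as
% proper opening/closing quotes, so use backtick/apostrophe pairs:
`single quoted text'  and  ``double quoted text''
% e.g. the adage ``quality is far more important than quantity''
```

Word processors insert curly quotes automatically, but in LaTeX the opening and closing marks must be typed explicitly as shown.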
Ethical considerations
If you plan to collect data as part of your MSc dissertation research, you need to contact the coordinator early. Obtaining ethical approval takes time and may not be feasible within your dissertation timeline. If you think data collection is an important part of your work, please contact the coordinator as soon as possible to evaluate the feasibility of your proposal.
If you are planning to use existing datasets you will likely not need to apply for ethical approval, but you should apply the same critical evaluation to other people’s data collection and management processes. For example: was the data ethically sourced? Are the records anonymised or pseudonymised? Was informed consent obtained? This should be addressed in the design chapter of your dissertation, in a similar way to how you identify possible limitations of your project.
Examples of Dissertations
Top dissertations in Data Analytics
Detection of Pathological HFO Using Supervised Machine Learning and iEEG Data
An Exploration of the Relationship between the Partisan-Business Cycle and Economic Inequality within Developed Economies
Top dissertations in Advanced Software Development
Other examples in the Library Catalogue since 2024 (*)
Data Analytics dissertations
Advanced Software Development dissertations
Other examples in Arrow (previous repository) up to 2023 (*)
General Computer Science dissertations
(*) Only dissertations that achieved a first or upper second class honours standard (i.e. ≥60%) are published on TU Dublin repositories.
Dissertation for evaluation and submission
Submission of the dissertation consists of a PDF of the text and a plagiarism report file emailed to the Dissertation Coordinator (lucas.rizzo@tudublin.ie).
Submitting your electronic copy of dissertation (PDF)
Submit an electronic copy of the dissertation (PDF) by email. Note: only PDF is accepted (.doc and other formats are not taken into account).
The filename of the pdf MUST be: StudentNumber_name_surname_MSc_thesisForEvaluation.pdf.
All dissertation material should be submitted to the Dissertation Coordinator before the deadline by email.
Plagiarism report
You will need to submit a plagiarism report with your dissertation. To create a plagiarism report, you need to enrol in the following Brightspace module:
Module Name: Research Proj and Dissertation SPEC9999: 2023-24
Module Code: SPEC999924055TU060-2324
Under Assessment – Assignment you will find a submission box entitled “MSc Dissertation submit for plagiarism report”. You should upload your final PDF here. You can print the plagiarism score as a PDF and send it to the coordinator with your final dissertation.
Presentation of dissertation
About the presentation
Your presentation will be 15 minutes only, so make sure you do not have too many slides. It will be followed by 10 minutes of questions.
Be ready to present up to 10 minutes early, just in case we are running ahead of schedule.
There is no need to be nervous. You have done the work.
Your presentation should focus on what work you did, what you discovered, what you learned and your recommendations. Do not focus on your literature review – those at the presentation will have read your dissertation.
Make sure you have a PDF version of your presentation (in case of problems).
The presentation schedule and the location of the room (if in person) or an online link (if online) will be confirmed and emailed to you a couple of days before the scheduled date of the presentations.
After your presentation you will be emailed details of any changes you need to make to your dissertation.
Presentation Structure
Recommended structure of the presentation (7-10 slides). Note: you have 15 minutes for your presentation.
Literature review & background (max 1 slide)
Gaps/motivation and research problem/question (1 or 2 slides)
Design and research methodologies (1 or 2 slides)
Implementation and experiments (1 or 2 slides)
Results and discussion (1 or 2 slides)
Contribution and impact (max 1 slide)
Future work and recommendations (max 1 slide)
Publishing the final dissertation
After the presentation, the final submission is a PDF emailed to the Dissertation Coordinator (lucas.rizzo@tudublin.ie).
Publishing your final dissertation
If your dissertation is at a first or upper second class honours standard (i.e. ≥60%), it will be published (by the MSc coordinator) and made available in TU Dublin’s library repository. If you do not want your dissertation to be publicly available, please inform the Dissertation Coordinator.
Checklist
The following checklist is to assist the MSc Dissertation student in completing the final version of the dissertation. You will be contacted by your supervisor after the presentation about what changes, if any, are needed to your dissertation. You will only have a short time period to make these changes.
Make sure that your dissertation incorporates all the structure and formatting guidelines.
Make sure that you correctly number all pages of the dissertation.
Make sure that the table of contents and other tables reflect the new and updated page numbers.
Make sure all tables and diagrams are correctly labelled (and referenced if needed).
Where appropriate, get a native English speaker to proofread your dissertation and identify/correct all English spelling and grammar mistakes.
Make sure that your references are complete and full details given using the suggested referencing style.
Avoid having any blank spaces on pages where you have a diagram on the next page.
Complete all recommended changes, as agreed with your supervisor, if any.
The Spine of the dissertation should contain
Student Name.
Year.
MSc. in Computer Science (“specialism”). Replace “specialism” with your MSc specialism (e.g. Data Analytics, Advanced Software Development).
The front cover of your dissertation should have
Title of Dissertation (centred and Title Case).
Student Name (centred and Title Case).
MSc. in Computer Science (“specialism”) (left justified, Title Case) and Year (right justified).
You must sign and date the Declaration page of each copy of your hard-bound dissertation.
Note that hard-bound copies of your dissertation are no longer stored by TU Dublin, so you are only required to provide a hard-bound copy of your final dissertation to your supervisors if they request it.
Submitting Your Final Dissertation (PDF)
Submit the final PDF of your dissertation and all additional material to the Dissertation Coordinator before the deadline.
The filename of the PDF MUST be: StudentNumber_name_surname_MSc_FinalThesis.pdf
Copyright, deferrals, failure and fees
Copyright
In accordance with the TU Dublin IP Policy, TU Dublin recognises that students who create IP own the products of their intellectual efforts. TU Dublin will not use any of the content in your dissertation.
Deferral
Applications for deferrals of the Dissertation module can be made by following the Deferral procedures available here.
Requests for deferrals should be accompanied by supporting documentation, e.g. medical certificates or other appropriate documentation giving appropriate reasons for the deferral request. Except in exceptional circumstances, all students who defer the Dissertation are expected to take a new Dissertation topic and submit a new Dissertation proposal.
Part-time students who defer their dissertation within 4 weeks of the start date of the semester in which they are completing it can apply to have their fee carried over to the following semester. Except in exceptional circumstances, part-time students who defer outside of this period will be expected to pay to take the Dissertation module again.
In place of a deferral, students with certified medical circumstances can apply for an extension of the Dissertation deadline for the length of the certified illness. Students can apply for an extension by contacting their supervisor and informing the coordinator.
The time frame for completing the MSc in Computing part-time is 6 years. A student should submit a proposal for the Dissertation module within one year of completing the other modules. A student who does not do this is expected to exit with the Postgraduate Diploma (PgDip).
Failure
If you fail the dissertation, you need to make a formal written application to the coordinator specifying that you wish to repeat the Dissertation module.
Additionally, if you are approved to repeat, you will need to pay the appropriate fees (see this page). You will also be expected to submit a new proposal on a new topic and, if it is approved, start a new project. A new supervisor will be assigned to you.
Repeats of dissertations are handled according to the TU Dublin regulations as outlined in the General Assessment Regulations.
Fees
Once you have received a formal email indicating the approval of your dissertation proposal, and a supervisor has been assigned, you will be registered for the Dissertation module.
If you are a full time student the fees for the dissertation module are covered in the fee you paid for the MSc programme.
If you are a part-time student, you must pay the fees for the Dissertation module.
For further information about fees, please refer to the TU Dublin Fees and Grants information.