The FAMOUS project addresses the rise of fake activities in Europe, aiming to disrupt the fake accounts market ecosystem that fuels disinformation. Our focus is on critical issues involving fake accounts that imitate human behaviour, such as influencer marketing fraud, phishing scams, and counterfeit product sales. The goal is to develop specialised tools for social media platforms, regulators, researchers, marketing agencies, and the public to track such activities and draw meaningful inferences. This €400k project is funded by the European Media and Information Fund. We are collaborating with Luxembourg's IRiSC Research Group.
We are developing Poli, an advanced tool that employs AI to analyse and simplify extensive bodies of text. By presenting information in a visual navigation format, it enhances accessibility and comprehension, thereby empowering users. Poli revolutionises the accessibility of complex financial and legal documents, making them clear and understandable for everyone. Key benefits include enhanced clarity, support for vulnerable consumers, and financial empowerment. This start-up project has received over £80k in investment from Newcastle University, Northern Accelerator, and UKFin+.
The online world is a curious but uncertain place. It enriches many facets of life, but at the same time exposes citizens to a variety of threats that may cause harm to them, their loved ones, and wider society. Many of these harms result from a complex interaction of societal processes driven by diverse stakeholders; we call these Complex Harms. In this project, we look to provide a profound understanding of the role of online agency in protecting citizens, and will deliver collaborative methods, technological building blocks, and scientifically grounded best practices for our society to provide more proactive and structured approaches to protecting citizens online. This collaborative project involves partners from Birmingham, Durham, Surrey, and Royal Holloway. For more information, please visit https://agencyresearch.net (grant EP/W032481/1).
The pervasive presence of AI agents in socially complex settings demands a holistic understanding of human-AI interaction (HAII) and its impact on users. Beyond mere functionality, AI agent design encompasses operational dynamics and user perceptions. This project explores the sociocultural dimension of HAII, adopting an "AI-in-the-human-loop" approach that challenges the oversimplified notion of algorithmic performance. We aim to uncover the conceptual and ethical considerations underlying HAII design and evaluation, fostering theoretical advancement, empirical research, and methodological innovation. Our goal is to construct a robust conceptual framework guiding empirical studies and ensuring socially aligned AI systems. This project strives to enhance our comprehension of HAII and promote responsible, socially conscious AI design.
The aging population poses a growing challenge for healthcare systems worldwide. Many older adults prefer to remain in their homes, but this often requires support from domiciliary/home care services. Early detection of changes in daily activity and behaviour is vital to identify when an older adult might need more support or specific intervention. Current methods of monitoring older adults at home are often limited and rely on subjective assessments or reactive interventions. Low-cost sensors and smart plugs can provide valuable data on daily activities, but these insights are limited without knowledge of the person's usual state or underlying conditions. This project aims to develop a system that combines sensor data, digital care records, and care worker insights to provide a comprehensive picture of an older adult's well-being.
Conventional user authentication methods, such as passwords and one-time passwords (OTPs), are vulnerable to compromise through theft, interception, or manipulation. While behavioural analysis techniques offer enhanced security by scrutinising user interactions and patterns, they often rely on predefined rules and static thresholds, limiting their adaptability to evolving user behaviour and emerging threats. To address these shortcomings, this research project proposes a novel user authentication system that employs reinforcement learning-driven behavioural analysis to achieve greater security and adaptability.
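As a toy illustration of the general idea, a minimal tabular reinforcement-learning sketch (a contextual-bandit simplification with an assumed reward model and hand-picked risk levels, not the project's actual system) might learn authentication decisions like this:

```python
import random

# Minimal tabular sketch: learn which authentication response to take at
# each behavioural-risk level from a (hypothetical) reward signal.
STATES = ["low_risk", "medium_risk", "high_risk"]
ACTIONS = ["allow", "challenge"]
ALPHA, EPSILON = 0.5, 0.1          # learning rate, exploration rate

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Assumed reward model: challenging risky sessions is good,
    # challenging legitimate (low-risk) users adds needless friction.
    if state == "high_risk":
        return 1.0 if action == "challenge" else -1.0
    return 1.0 if action == "allow" else -0.5

def step(state):
    # epsilon-greedy action selection, then a simple value update
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    r = reward(state, action)
    q[(state, action)] += ALPHA * (r - q[(state, action)])
    return action

random.seed(0)
for _ in range(500):
    step(random.choice(STATES))
```

After a few hundred simulated sessions, the learned values favour challenging high-risk behaviour while leaving low-risk users unimpeded; the adaptive element is that the thresholds are learned rather than hand-set.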
One challenge with Natural Language Processing (NLP) models is their limited ability to process large texts, due to their architectural and computational constraints. To overcome this, we introduce ARElight, a system specifically designed to efficiently handle and analyse large document sequences. ARElight works by breaking down these extensive texts into segments centred around pairs of objects mentioned in the text. This segmentation enables the construction of graphs that represent the narratives within the input text. These graphs allow for various analytical operations. For instance, we can explore questions like, "How do the narratives of these texts compare?" or "What common elements do they share?". This approach is particularly effective for analysing and comparing social media accounts, observing shifts in narratives over time, and examining extensive texts such as books, offering a novel way to process and understand large-scale textual data.
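The pair-centred segmentation and graph comparison described above can be sketched in a few lines. This is a simplified illustration of the idea, not ARElight's actual implementation: the object sets below are hand-picked stand-ins for its entity extraction and segmentation steps.

```python
from collections import Counter
from itertools import combinations

def build_narrative_graph(segments):
    """Build a weighted graph over object pairs co-mentioned in a segment.

    `segments` is a list of sets of object mentions, one set per text
    segment; each co-mention adds weight to the edge between the pair.
    """
    edges = Counter()
    for objects in segments:
        for a, b in combinations(sorted(objects), 2):
            edges[(a, b)] += 1
    return edges

def graph_overlap(g1, g2):
    """Jaccard similarity over the edge sets of two narrative graphs --
    one simple way to ask 'how do these narratives compare?'."""
    e1, e2 = set(g1), set(g2)
    return len(e1 & e2) / len(e1 | e2) if (e1 | e2) else 0.0

# two toy "texts", already segmented into object-mention sets
text_a = [{"company", "regulator"}, {"company", "fine"}]
text_b = [{"company", "regulator"}, {"regulator", "fine"}]
ga, gb = build_narrative_graph(text_a), build_narrative_graph(text_b)
print(graph_overlap(ga, gb))
```

The shared edge (company, regulator) is what the two narratives have in common; richer analytical operations would run over the full weighted graphs.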
The EULA (End User License Agreement) is often a lengthy document that users commonly overlook, despite containing crucial details about data collection practices by companies. This includes information about what data is collected from users, the purposes of this collection, and how the data is shared and transferred. Notably, some companies deliberately make these documents complex to dissuade thorough reading. In our project, we aim to tackle this issue by developing an AI-powered visualisation tool. This tool is designed to automatically extract key information from user agreements and present it in a format that is easy for people to understand. The goal is to enable users to quickly grasp the essentials of what, how, and why companies collect their data, without the need to sift through long and complicated texts. This approach empowers users to make more informed decisions about their data privacy.
This research focuses on examining the impact of censorship on social media, specifically by analysing comments on VKontakte during the early stages of the Russian-Ukrainian conflict. Utilising text-based classification methods, we identify the allegiance of comment authors in the conflict and track how these narratives evolved over time under the influence of censorship. The methodology developed and the insights gained from this study are valuable for future research in this area, offering a deeper understanding of the dynamics of information warfare in today's digital era.
The recommendation algorithms of popular video-based social networks like TikTok or Instagram Reels are designed in a way that can cause users to lose track of time, often leading to hours spent viewing videos. This study explores methods to address this issue through UX disruptions. These disruptions are subtly designed to interrupt video viewing and help users regain control of their time and agency. Additionally, the study investigates the impact of such interventions on the app's machine agency, specifically examining how they affect AI recommendations and the behaviour of recommendation algorithms.
Analysing a massive VK dataset, we investigated how AI-generated photos impact malicious bots in disinformation and fraud. Comparing GAN-photo bots to regular ones across metrics like price, survival, and trust, we surprisingly found them less sophisticated and dangerous. Cheaper, less popular, and faster, they also succumbed more readily to detection, suggesting the "cyber arms race" in GAN detection may already be levelling the playing field. (preprint, data)
Though affordable smart homes promise convenience and well-being, they also create hidden harms. Current research, heavily tilted towards technology, overlooks users' concerns about privacy, security, and control. To fill this gap, we analysed 235 publications (2017-2022) across diverse fields. Finding a bias towards information security and a lack of user-centric studies, we urge future research to focus on how users experience, interact with, and navigate the complexities of smart homes. (paper)
The UK's March 2023 AI white paper proposes a "pro-innovation" regulatory framework built on five key principles: safety, transparency, fairness, accountability, and redress. This flexible, sector-specific approach, overseen by a central coordinator, aims to fuel AI leadership while fostering trust, investment, and growth. Our multidisciplinary team offers comments and recommendations to ensure ethical and responsible AI development, safeguarding citizens from complex online harms. (paper)
We propose a novel semi-supervised paraphrase generation approach using deep latent variable models. The proposed method combines an unsupervised latent sequence inference model (VSAR) with a supervised dual directional learning model (DDL) to address the cold-start problem. The combined model achieves competitive performance against state-of-the-art supervised baselines, especially when labelled data is limited.
In this project, we proposed an automated method to generate valid ICD-11 codes from medical text. The method utilises vector representations, cosine similarity, and bidirectional matching to achieve an F1 score of up to 0.91, surpassing existing tools. This work facilitates efficient ICD-11 coding for healthcare informatics.
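The bidirectional (mutual best-match) filtering over cosine similarities can be illustrated with a toy sketch. The 3-d vectors and code labels below are illustrative placeholders, not the project's real embeddings or a validated ICD-11 mapping.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def bidirectional_matches(mentions, codes):
    """Keep a (mention, code) pair only when each side is the other's
    best match under cosine similarity -- mutual best-match filtering."""
    best_code = {m: max(codes, key=lambda c: cosine(mentions[m], codes[c]))
                 for m in mentions}
    best_mention = {c: max(mentions, key=lambda m: cosine(mentions[m], codes[c]))
                    for c in codes}
    return {(m, c) for m, c in best_code.items() if best_mention[c] == m}

# toy 3-d vectors standing in for learned text embeddings;
# the code identifiers are placeholders, not verified ICD-11 codes
mentions = {"acute appendicitis": [0.9, 0.1, 0.0], "migraine": [0.0, 0.2, 0.9]}
codes = {"DB10": [0.8, 0.2, 0.1], "8A80": [0.1, 0.1, 0.95]}
print(bidirectional_matches(mentions, codes))
```

Requiring the match to hold in both directions discards one-sided, low-confidence assignments, which is what pushes precision up in this style of coding pipeline.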
Medical triage robots powered by NLP improve healthcare, but accurate topic assignment is crucial. We introduce KC-LLDA, a novel method excelling on medical texts by combining domain knowledge with topic modelling. Further, a hybrid approach merging supervised and unsupervised learning boosts KC-LLDA and BERT, enhancing contextual understanding and classification accuracy. Improved topic assignment leads to better robots and, ultimately, better patient care.
In this project, we conducted experiments to extract image features from design images and the reviewers' eye-tracking data, aiming to predict product design rankings in the competition through data-fusion analysis.
In this project, we introduced three sequence-to-sequence models with a novel frame loss mechanism to forecast future visual behaviour sequences from historical data. Experiments demonstrated that our models outperformed baseline methods and yielded promising results, validating the feasibility and effectiveness of visual behaviour sequence prediction in VR museums.
The growing popularity of VR exhibitions necessitates the development of effective 3D user interfaces (UIs) for head-mounted displays (HMDs). This project conducted two workshops delving into UI design paradigms and ergonomic arrangements to enhance user experience and immersion in VR exhibitions. The aim is to provide designers and developers with practical guidelines, saving them time and effort.
We introduce PICA-PICA, a smart customisable environment that uniquely combines the programming and engineering of artworks. Real-world testing demonstrated PICA-PICA's effectiveness in terms of integration, motivation, and improved learning outcomes. The study also presents an innovative STEAM implementation using Micro:bit technology to create sustainable artworks from recyclable materials. Parental involvement further highlights the study's unique aspects.
This study explores the connection between UI user preference and deep image features, aiming to predict user preference levels using deep convolutional neural networks (DCNNs) trained on a UI design image dataset. Our EfficientNet-based model achieved the best results, demonstrating the proposed method's potential for learning UI design aesthetics patterns. Based on the prediction model, a mobile application called 'HotUI' was developed for UI design recommendations.
In this project, we propose Sim-GAIL (student modelling based on Generative Adversarial Imitation Learning) and evaluate its performance against two traditional student modelling methods using action distribution, cumulative reward, and offline policy evaluation.
In this project, we designed a virtual reality (VR) environment for learning American Sign Language. A 2D and a 3D version of the VR game were developed and compared through various user studies.
In this paper, we propose MLFBK: a Multi-Features with Latent Relations BERT Knowledge Tracing model, a novel BERT-based Knowledge Tracing approach that utilises multiple features and mines latent relations between features to improve the performance of the Knowledge Tracing model.
In this project, we propose a key idea extractor powered by pre-trained language models, overcoming limitations of traditional topic modeling and keyword extraction methods. Our approach clusters key ideas without relying on verbatim wording, providing comprehensive coverage of student feedback.
In this project, we use Twitter to analyse emotions in detail. We propose EmoBERT, a new emotion-based variant of the BERT transformer model, able to learn emotion representations and outperform the state-of-the-art. We provide a fine-grained analysis of the pandemic's effect in a major location, London, comparing specific emotions (annoyed, anxious, empathetic, sad) before and during the pandemic.
In this paper, we simulate the interactions between these factors to offer education stakeholders – administrators and teachers, in the first instance – the possibility of understanding how their activities and the way they manage the classroom can impact students’ academic achievement and result in different learning outcomes.
In this project, deep convolutional neural networks were applied to evaluate the aesthetic preferences of GUIs, based on a large dataset of user interface design images with ground-truth annotations. The experimental results indicated the feasibility of the proposed method. We aim to build a large aesthetic image database and to explore a practical and objective evaluation model of GUI aesthetics.
In this project, we analysed a course offering both public and private types of communication to its participants. We propose new measures for sentiment analysis to better express the nature and speed of change of sentiment in the two channels used by our learners during their learning process.
In this paper, we expanded upon a previously proposed taxonomy of game elements in gamified learning environments. We present a detailed and extended taxonomy, organized into five dimensions related to the learner and the learning environment. This comprehensive taxonomy serves as a valuable tool for designing and evaluating gamified learning experiences.
In this project, we proposed a multifaceted open social learner modeling (OSLM) design by introducing novel personalized social interaction features into Topolor, a social personalized adaptive e-learning environment. An experimental study evaluates the impact of these features, providing insights for future research and improvements.
Co-design is often used to incorporate stakeholder feedback, but staff time constraints can hinder their involvement. Digital crowdsourcing offers a solution by enabling staff to contribute in small increments. To explore this, we implemented a digital crowdsourcing co-design prototype and assessed participants' acceptance, providing valuable insights for further development.
This project explored the implementation and perceived acceptance of learner profiles in Topolor, a social personalized adaptive e-learning environment (SPAEE). A case study confirms high learner acceptance of profile-related features, and the analysis suggests directions for further research and improvement.
In this project, we proposed a web-based system leveraging machine learning to predict student performance, incorporating the XAI algorithms SHAP and a modified counterfactual explanation method to elucidate model predictions. Visualisation techniques effectively present these explanations, enabling user interaction. The system's design employs the question-driven design process, a formalised XAI UI/UX design method, demonstrating its efficacy in building human-AI trust.
In this project, we use computer vision techniques to track and examine the trajectories of weightlifters in order to help them improve their form and technique. Results indicate that our approach can help reduce the contact time required with each athlete, as coaches are able to spot mistakes and the points requiring improvement more effectively.
We propose a novel semantic-level attention mechanism integrated into the bi-directional GRU-CNN architecture. This fine-grained attention mechanism effectively focuses on relevant semantic features, leading to a target-conditioned tweet representation. Evaluated on a benchmark Stance Detection dataset, the proposed model significantly outperforms state-of-the-art token-level attention and SVM baselines.
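The core of a target-conditioned attention mechanism of this kind can be sketched in a few lines. This toy uses plain dot-product scoring over hand-made 2-d vectors and omits the bi-directional GRU-CNN encoder entirely; it only illustrates how attention weights focus the representation on target-relevant tokens.

```python
import math

def attention_pool(token_vecs, target_vec):
    """Score each token vector against the target, softmax the scores,
    and return the attention weights plus the weighted-sum representation."""
    scores = [sum(t * q for t, q in zip(tok, target_vec)) for tok in token_vecs]
    m = max(scores)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(target_vec)
    pooled = [sum(w * tok[d] for w, tok in zip(weights, token_vecs))
              for d in range(dim)]
    return weights, pooled

# toy 2-d "token" vectors and an embedded stance target
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
target = [1.0, 0.0]
weights, rep = attention_pool(tokens, target)
```

Tokens aligned with the target receive higher weights, so the pooled vector `rep` is a target-conditioned representation of the tweet, which is the property the full model exploits for stance detection.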
In this project, we investigate the generalisation capabilities of RNN sequence-to-sequence and Transformer models. Through comprehensive analyses, we identify the limitations of these models and demonstrate the Transformer's superior robustness to out-of-distribution data and dataset bias. Future work should explore methods to further enhance the Transformer's generalisation ability.
In this project, we propose and evaluate different granularity visualisations for learning patterns derived from clickstream data, comparing the patterns of course completers and non-completers. Across a variety of domains, our fine-grained, fish-eye visualisation revealed that non-completers tend to jump forward in their learning sessions, often exhibiting a "catch-up" pattern, while completers demonstrate more linear behaviour.
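A minimal sketch of how such patterns might be mined from a clickstream follows; it is a simplified stand-in for the project's actual pattern extraction, with item indices and example sessions invented for illustration.

```python
from collections import Counter

def transition_profile(clickstream):
    """Classify each step in a session's clickstream of item indices as
    linear progress, a forward jump (skipping items), or a jump back."""
    profile = Counter()
    for prev, curr in zip(clickstream, clickstream[1:]):
        step = curr - prev
        if step == 1:
            profile["linear"] += 1
        elif step > 1:
            profile["jump_forward"] += 1
        else:
            profile["jump_back"] += 1
    return profile

completer = [1, 2, 3, 4, 5, 6]        # steady linear progress
non_completer = [1, 4, 5, 2, 3, 7]    # skips ahead, then catches up
print(transition_profile(completer))
print(transition_profile(non_completer))
```

Aggregating these per-session profiles across learners is one simple way to surface the "jump forward, then catch up" signature that separates non-completers from completers.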
In this project, we propose NC-HGAT, a novel approach that augments a state-of-the-art self-supervised heterogeneous graph neural network model with neighbor contrastive learning. NC-HGAT effectively leverages graph structure information using multilayer perceptrons, maintaining robust performance amidst corrupted connections. Extensive experiments on benchmark datasets demonstrate NC-HGAT's superior performance over state-of-the-art methods.
The ubiquitous presence of AI agents in socially intricate environments demands a comprehensive understanding of human-AI interaction (HAII) and its impact on human users. In HAII, AI design extends beyond functionality, encompassing operational dynamics and human perceptions. This project explores the sociocultural aspects of HAII, adopting a nuanced "AI-in-the-human-loop" approach. By challenging simplistic views of algorithmic performance, we aim to uncover the conceptual and ethical considerations underlying HAII design and evaluation. This endeavour is crucial for propelling theoretical, empirical, and methodological advancements. Our goal is to develop a robust conceptual framework guiding empirical studies and fostering socially responsible AI systems. Ultimately, this project strives to deepen our understanding of HAII and promote responsible AI design.
Within the dynamic landscape of education, Intelligent Tutoring Systems (ITSs) have emerged as catalysts for transformation, offering personalised instruction that adapts to individual learning needs. Despite their potential to revolutionise education, traditional ITS authoring methods remain hindered by significant challenges: they are time-consuming, resource-intensive, and inaccessible to non-programmers. This research proposes a ground-breaking approach harnessing the power of simulated students, Large Language Models (LLMs), and human-AI interaction techniques to democratise ITS creation, elevate the learning experience, and foster exceptional student outcomes. This research aims to address the above challenges by exploring the following questions: RQ1. To what extent can simulated-student-based ITS authoring, empowered by LLMs, significantly reduce the time and expertise required to develop high-quality, personalised ITSs? RQ2. What strategies can be effectively implemented to integrate LLMs into the ITS authoring process, enhancing the functionality, adaptability, and effectiveness of the resulting ITSs? RQ3. What innovative human-AI interaction techniques can be adopted to optimise the user experience, accessibility, and efficiency of simulated-student-based ITS authoring?
The escalating reliance on AI raises concerns about reliability and trust. Issues such as mistakes, bias, and potential malicious intent underscore the need for methods to ensure the safety, reliability, and accountability of AI systems. Symbolic AI, or knowledge representation and reasoning, presents a promising approach. It offers a clear language for representing, analysing, and reasoning about system behaviour. This clarity enables the verification of models, ensuring safety and correctness. Moreover, symbolic AI provides human-understandable explanations for decisions, fostering trust in AI systems. This PhD research explores the potential of symbolic AI techniques to enhance the safety and trustworthiness of AI systems.
AI is poised to transform the very fabric of our work, interactions, and self-perception. Artificial Social Intelligence (ASI), an emerging field of study, seeks to imbue machines with the ability to comprehend, engage, and behave in a socially intelligent and human-like manner. This PhD project aims to unravel the psychological and societal ramifications of our interactions with chatbots, virtual assistants, and AI-powered avatars. The findings will not only deepen our understanding of AI's societal impact but also provide invaluable guidance for the responsible development and deployment of these social AI systems. Furthermore, the research will contribute to the formulation of ethical frameworks and regulations within the AI industry.
AI is rapidly reshaping our world, touching upon various facets such as healthcare, transportation, and education. With the increasing integration of AI into our daily lives, it is crucial to understand the nuances of human-AI interaction and foster responsible AI development aligned with human values and ethical principles. This PhD study addresses these vital issues by systematically exploring the dynamics of human-AI interaction and examining strategies for promoting responsible AI development and implementation. The study will focus on three key objectives: 1) dissecting human-AI interaction patterns; 2) identifying factors influencing responsible AI development; and 3) constructing frameworks for responsible AI implementation.
How can a small group of humans and machines collaborate in decision making? What mechanisms and frameworks are needed for humans and machines to coordinate their activities and efficiently make decisions? Is it possible to scale up these methods to support activities at massive scale? In this project, we will address these questions by exploring new forms of human-machine interaction and collaboration.
The ultimate goal of this research is to explore and implement emotional intervention mechanisms that ensure affective and cognitive quality in distance learning gamified using game design strategies and game elements. It does so by analysing and mapping the relationships between key factors affecting students’ emotions and learning effectiveness, and by adaptively generating and delivering personalised learning materials that can intervene in students’ emotional state, ultimately improving learning outcomes.
Below are potential dissertation projects for Newcastle UG and MSc students.
AI is a powerful and complex technology that can make decisions that affect humans, but it can also be opaque and untrustworthy, especially when it makes errors or produces unexpected outcomes. This project aims to create an interactive system that enables human users to evaluate and compare different types of AI explanations. Explainable AI (XAI) is a field that aims to make AI more transparent and trustworthy by providing ways to explain its decisions to humans. However, measuring the quality and usefulness of AI explanations is a challenging and open problem that requires empirical and theoretical research. The project will first conduct user research to understand the needs and expectations of users regarding XAI, and use the findings to inform the design and implementation of an interactive system that offers various kinds of AI explanations, such as text, image, or chat. The project will then test the system’s usability, effectiveness, and impact on user trust and satisfaction, using both quantitative and qualitative methods. The project will also deploy the system with real users, such as AI experts, researchers, or end-users, collect data on their experience, feedback, and trust, and use the data to improve the system.
Cybersecurity is a vital skill for the digital age, but it can be hard and boring to learn, especially for students. This project aims to design and develop a game that teaches cybersecurity skills to students in a fun and interactive way, using data science methods to create realistic scenarios and challenges, provide personalised feedback and guidance, display clear and interactive visualisations, and evaluate the learning outcomes. The game can be web-based or mobile-based, depending on the preference and skills of the project team. The project will also implement a learning analytics component that monitors and analyses the learning process and outcomes of the players, using data collection, processing, and reporting tools and techniques. This component will help the project team understand how the players learn, what difficulties they face, what strategies they use, and how they progress towards and achieve the learning goals. The project team will also use the learning analytics component to provide feedback and recommendations to the players, as well as to improve the game design and content based on the data analysis. The project will help the team learn about cybersecurity, demonstrate their data science skills, and develop their portfolio as data scientists and game developers. The team will also have a chance to trial the game with real users and gather feedback and insights from them.
Digital health systems, such as electronic records, wearables, and apps, are widely used in healthcare and research, but they generate large volumes of complex, diverse, and high-dimensional data, which require effective methods to process and model, and to present the resulting insights in a clear and interactive way. This project aims to create a visual analytics system that uses data science and visualisation to help healthcare professionals and researchers analyse and understand the data from digital health systems. The project will first identify the data sources and types for digital health systems, such as patient outcomes, behaviour, feedback, and safety. The project will then develop data science methods to process and model the data, such as clustering, classification, regression, or anomaly detection, and design visualisation methods to display the data and insights in a clear and interactive way, such as charts, graphs, maps, or dashboards. The project will then evaluate the usability, effectiveness, and impact of the system on healthcare decisions and research, using various methods and tools, such as user testing, surveys, interviews, or analytics. The project will also test the system with real users, such as healthcare professionals, researchers, or patients, and collect data on their experience, feedback, and decisions.
Data science and machine learning are essential skills for the modern world, but they can be challenging and intimidating to learn, especially for undergraduates. This project aims to create a web or mobile app that teaches data science and machine learning skills to undergraduates using gamification and open student modelling. Gamification adds game elements, such as points, badges, levels, challenges, and feedback, to increase motivation and engagement. Open student modelling creates and displays representations of the student’s knowledge, skills, goals, and progress, and uses them to provide personalised and adaptive learning experiences. The project will apply these methods to design and implement a gamified app that fosters the students’ intrinsic motivation and self-determination, grounded in self-determination theory. The project will also use data science methods to create realistic data and results, display interactive visualisations, and evaluate the learning outcomes. The project will collect and analyse data on the user experience, feedback, and learning outcomes, and use the data to improve the gamified app. The project will help the students learn and demonstrate data science and machine learning skills, and develop their portfolio as data scientists and gamified app developers. The team will also have a chance to trial the app with real users and gather feedback and insights from them.
Generative AI is a branch of artificial intelligence that can create novel and diverse content, such as text, images, audio, and video, using techniques such as natural language generation, image synthesis, audio and video manipulation, etc. However, generative AI also poses ethical and social challenges, such as how to align it with human values, how to assimilate human intents, and how to augment human abilities. This project aims to explore how generative AI can align with human values, assimilate human intents, and augment human abilities by designing and evaluating novel interfaces and interactions that leverage generative AI techniques. The project will also investigate the ethical and social implications of human-centred generative AI systems and propose guidelines and best practices for their responsible development and use. The project will conduct user testing to measure and improve user performance and experience with the system, using various methods and tools. For instance, the project can create a system that can generate personalised and diverse stories based on user preferences and feedback, and evaluate how it affects user engagement, enjoyment, and creativity, as well as how it respects user privacy, consent, and agency, and how it handles potential biases, errors, and harms.
Human-GenAI collaboration is a novel and emerging form of interaction where humans and generative AI systems work together to achieve common goals. However, this collaboration requires both parties to have metacognitive abilities, such as awareness, understanding, and control of their own and each other’s thought processes. This project aims to study how humans and GenAI systems interact and collaborate with each other by measuring and improving their metacognition. The project will also design and implement new paradigms and scenarios for human-GenAI collaboration, such as co-creation, feedback, guidance, and personalisation, and evaluate their outcomes and impacts, such as trust, satisfaction, creativity, and productivity. The project will conduct user testing to measure and improve user performance and experience with the system, using various methods and tools. For instance, the project can create a system that can assist users in writing code by generating and explaining suggestions, and evaluate how it affects user confidence, learning, and performance, as well as how it fosters trust, transparency, and accountability, and how it handles potential conflicts, misunderstandings, and failures.
In our increasingly interconnected world, personal data is continually collected, analysed, and utilised by various online platforms. However, many users remain unaware of the extent to which their information is harvested and how it impacts their digital experiences. This project aims to address this critical gap by creating an interactive tool that educates users about their data, its utilisation, and provides practical guidance on managing privacy settings. The envisioned tool will offer a Data Awareness Dashboard, categorising data types and providing visualisations. Users can explore specific data categories, assess privacy risks, and adjust settings across platforms. Educational resources will empower users, while usability testing and long-term impact monitoring will ensure effectiveness. By fostering a more privacy-conscious online community, this project bridges the gap between data collection practices and user agency.
Sustainability is a global challenge that requires individual and collective action. However, many people lack the motivation, awareness, or skills to adopt and maintain sustainable habits in their everyday lives. This project aims to develop and evaluate a gamified mobile app that utilises behavioural science principles and game mechanics to encourage individuals to engage in more environmentally friendly behaviours. The app will allow users to set personal goals, track their progress, receive feedback, earn rewards, and compete or cooperate with others. The app will also provide educational content, tips, and nudges to help users overcome barriers and learn more about the impact of their actions. The project will use a user-centred design approach to involve potential users in the design, development, and testing of the app. The project will also conduct a user study to evaluate the usability, effectiveness, and user experience of the app, as well as its influence on users’ attitudes, knowledge, and behaviour towards sustainability.
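The goal-tracking and reward mechanics described above could be prototyped along these lines: points proportional to impact, a daily streak, and badges at streak milestones. The point formula and badge thresholds are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class SustainabilityTracker:
    points: int = 0
    streak: int = 0
    # Illustrative badge thresholds in consecutive days (assumed values).
    BADGES = {3: "Sprout", 7: "Sapling", 30: "Tree"}

    def log_action(self, co2_saved_kg: float, done_today: bool = True) -> list:
        """Award points proportional to estimated impact, update the daily
        streak, and return any badges the current streak has unlocked."""
        if done_today:
            self.points += int(co2_saved_kg * 10)
            self.streak += 1
        else:
            self.streak = 0  # missing a day resets the streak
        return [b for days, b in self.BADGES.items() if self.streak >= days]
```

Keeping the mechanics in one small state object makes it easy to A/B test alternative reward schedules in the user study.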
Communication is a fundamental human skill, but it can be hindered by cultural differences, such as language, values, norms, and expectations. This project aims to develop and evaluate a conversational AI that facilitates effective cross-cultural communication by understanding and adapting to different cultural contexts. The conversational AI will use natural language processing and machine learning techniques to analyse the cultural dimensions and preferences of the users, such as individualism vs collectivism, power distance, and uncertainty avoidance. It will also use natural language generation and dialogue management techniques to generate appropriate and respectful responses, such as greetings, compliments, apologies, and requests, that match the cultural context of the users. The project will use a user-centred design approach to involve potential users in the design, development, and testing of the conversational AI. The project will also conduct a user study to evaluate the usability, effectiveness, and user experience of the conversational AI, as well as its impact on users’ intercultural competence, awareness, and sensitivity.
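A minimal sketch of the adaptation step is rule-based template selection over cultural-dimension scores. The profiles and templates below are illustrative stand-ins, not published Hofstede index values, and a real system would refine such rules from user interaction data rather than hard-coding them.

```python
# Illustrative dimension scores on a 0-100 scale (assumed, not published
# values); a deployed system would draw on validated indices or learn
# preferences per user.
PROFILES = {
    "culture_a": {"power_distance": 70, "individualism": 40},
    "culture_b": {"power_distance": 35, "individualism": 90},
}

TEMPLATES = {
    "request": {
        "direct": "Please send me the report.",
        "indirect": "I was wondering if you might be able to share the report?",
    },
}

def adapt(intent: str, culture: str) -> str:
    """Pick a direct phrasing for high-individualism, low-power-distance
    contexts, otherwise a more indirect, deferential one."""
    p = PROFILES[culture]
    direct = p["individualism"] > 60 and p["power_distance"] < 50
    return TEMPLATES[intent]["direct" if direct else "indirect"]
```

In the full system this selection would sit inside the dialogue manager, with the NLG component realising the chosen style rather than using fixed strings.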
Anxiety is a common and debilitating mental health condition that affects millions of people worldwide. Cognitive Behavioural Therapy (CBT) is a proven and effective psychological treatment that helps people cope with anxiety by changing their thoughts and behaviours. However, CBT can be costly, time-consuming, and inaccessible for many people who need it. This project aims to design and evaluate a mobile app that gamifies CBT techniques for personalised anxiety management. The app will use game elements, such as goals, challenges, feedback, rewards, and social features, to motivate and engage users in learning and applying CBT skills, such as cognitive restructuring, exposure, relaxation, and problem-solving. It will also use personalisation and adaptation techniques, such as user modelling, preference elicitation, and recommendation, to tailor the CBT content and experience to the individual needs and preferences of each user. The project will use a user-centred design approach to involve potential users in the design, development, and testing of the app. The project will also conduct a user study to evaluate the usability, effectiveness, and user experience of the app, as well as its impact on users’ anxiety levels, symptoms, and coping strategies.
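The recommendation component might start as simply as matching exercises to a user's reported symptoms under a time budget. The exercise catalogue below is a hypothetical placeholder for illustration only, not clinical guidance.

```python
# Hypothetical catalogue mapping CBT techniques to the symptoms they
# target (illustrative; real content would come from clinical input).
EXERCISES = {
    "cognitive_restructuring": {"targets": {"catastrophising", "self_criticism"},
                                "minutes": 10},
    "graded_exposure": {"targets": {"avoidance"}, "minutes": 20},
    "breathing_relaxation": {"targets": {"physical_tension", "panic"},
                             "minutes": 5},
}

def recommend(symptoms: set, max_minutes: int) -> list:
    """Rank exercises by how many of the user's reported symptoms they
    target, keeping only those that fit the available time."""
    scored = [
        (len(info["targets"] & symptoms), name)
        for name, info in EXERCISES.items()
        if info["minutes"] <= max_minutes
    ]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]
```

This symptom-overlap heuristic is a starting point; the user-modelling work in the project would replace it with preference elicitation and adaptive recommendation over usage history.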
AI is a powerful and pervasive technology that has applications in various domains and contexts, such as healthcare, banking, retail, and media. However, the use of AI also poses ethical challenges and dilemmas, such as privacy, bias, transparency, accountability, and human dignity. This project aims to develop and evaluate an educational tool that raises awareness about the ethics of AI and empowers users to make informed decisions about its use. The project will first select a specific domain or context where AI is applied and research the ethical issues and trade-offs that emerge from the use of AI in that domain. The project will then design and implement a prototype of an educational tool that informs and engages the target audience about the ethics of AI in the chosen domain. The tool could take various forms, such as a game, a simulation, a chatbot, or a website. Finally, the project will conduct a user study to evaluate the usability, effectiveness, and user experience of the tool, as well as its influence on users’ attitudes, knowledge, and behaviour towards the ethics of AI. The project will contribute to the field of HCI by exploring how to create engaging and meaningful educational interventions for the ethics of AI.
If your project is related to Artificial Intelligence, Machine Learning, Behavioural Analytics, Recommender Systems, User Modelling, Personalisation, Cognitive Computing, Affective Computing, Gamification, Serious Games, Participatory Design, Intelligent Tutoring Systems, etc., please let us know.