
Building ethical AI: Why AI education needs a moral compass


Introducing ChatGPT. Photo: Supplied.

Over the past several years, Artificial Intelligence (AI) has emerged as a leading technology adopted across industries in both the public and private sectors. AI systems are increasingly used in finance, healthcare, human resources, manufacturing, automation, transport, smart cities, security, customer care, and critical infrastructure, among many other areas.

Digital assistants powered by generative AI are predicted to become the de facto standard in customer care.

Alongside AI's many benefits, its adoption raises ethical concerns, including bias and discrimination, privacy, accountability, job displacement, safety and reliability, and socioeconomic impact.


AI systems rely on large sets of data created by humans, which can carry human biases and lead those systems to discriminate against certain groups of people.

The collection of large amounts of personal data raises concerns about how that data is used and protected.

Sometimes it is difficult to understand how an AI system arrived at a decision, which makes it hard to hold anyone accountable if the decision is wrong.  

AI systems are driving a great deal of automation, which is likely to lead to job losses in some sectors.

Critical sectors such as healthcare, finance, and transportation require robust verification, and cyber breaches of AI systems used in these sectors can have very serious consequences.

Similarly, AI has the potential to disrupt labour markets, raising concerns about job displacement and economic inequality.

These ethical concerns require careful consideration and proactive measures to balance the potential benefits of AI against the need to mitigate its risks.

Tanveer Zia is Professor and Head of Computer Science at the University of Notre Dame, Australia. Photo: Supplied.

These considerations should be embedded at the very centre of the teaching and learning of AI and related technologies. This requires a multi-stakeholder approach, with collaboration between governments, academia, industry, and the public.

The University of Notre Dame has recently introduced programs that include AI, such as the Bachelor of Computer Science with an AI major and the Bachelor of Arts with an AI major.

While ethics is already embedded at the core of the Notre Dame curriculum, additional care has been taken to address the ethical concerns associated with AI systems in the course learning outcomes across the programs that include AI majors and specialisations.

Some of these learning outcomes ensure that Notre Dame graduates are equipped with: 

  • Knowledge and understanding of the fundamental concepts and techniques of artificial intelligence, while considering ethical implications and professional responsibilities in the development and deployment of AI systems; 
  • Concepts of machine learning, including ethical considerations and professional best practices in model development; 
  • The ability to select machine learning models for a given problem, considering ethical data selection and model evaluation, and drawing on relevant research findings; 
  • Advanced understanding of deep learning algorithms, considering ethical implications and responsible AI use in research and applications; 
  • The ability to apply deep learning techniques to real-world problems, demonstrating ethical considerations and professional practices in data handling and model deployment; 
  • Effective communication and presentation skills through the explanation of deep learning models and the results of experiments, emphasising ethical considerations in AI communication and the responsible use of AI technologies; 
  • Key ethical and legal considerations about the collection, processing, and utilisation of big data, while adhering to regulations and ethical guidelines. 

The University of Notre Dame’s programs aim not only to prepare future software engineers, cybersecurity specialists, data analysts, and machine learning engineers but also to instil in them a deep understanding of the ethical obligations associated with developing and implementing AI systems that uphold human well-being and societal values.  

These objectives resonate with the university’s ethos, emphasising small class settings, direct engagement with academics, and pastoral care for its students in line with its mission and values. 
