Trustworthy AI should be built for sustainable economic growth

Published: Tuesday, 18 February 2025 at 12:00 AM
Artificial Intelligence (AI) is revolutionizing industries globally, yet its deployment in low-income countries presents unique challenges. These nations often rely on AI solutions imported from high-income countries, an approach that, while promising, carries inherent risks.

One major hurdle is the prevalence of "black-box" AI systems, where algorithms and decision-making processes remain opaque. This lack of transparency hinders assessments of fairness, accuracy, and reliability, especially when considering local contexts, socio-economic conditions, and cultural nuances.

Furthermore, imported AI systems may not align with local ethical and legal standards. This can lead to biased outcomes, privacy breaches, and exploitation of vulnerable populations. For instance, these systems might fail to meet local data privacy laws or ethical considerations specific to healthcare, agriculture, or education.

Moreover, limited knowledge and skills within local developer communities hinder the responsible application and development of AI. This restricts their ability to maximize service efficiency, increase productivity, and reduce costs. For example, the UK aims to achieve a 30% increase in farm productivity and a 20-40% reduction in costs through responsible AI-powered precision farming, optimizing inputs like fertilizers, water, and energy use.

To mitigate these risks (see Figure 1), low-income countries must prioritize the development of trustworthy, ethical, transparent, and explainable AI systems that comply with both international and local standards. These systems should empower communities without compromising fairness or security. This article explores a path forward for building such AI systems and leveraging them for sustainable economic growth in low-income countries.

Challenges Faced by Low-Income Countries in Implementing Trustworthy AI: Low-income countries face several significant challenges when adopting AI. These challenges stem from a combination of limited local expertise, infrastructure, and policy frameworks necessary to implement AI solutions responsibly and ethically.

Lack of Research and Development (R&D) Capacity: One of the most significant barriers to AI adoption in low-income countries is the lack of local research and development (R&D) capacity. These countries often rely on foreign suppliers for AI systems, many of which are designed and developed without considering the specific needs of the local context. As a result, low-income countries often lack the tools, skills, and infrastructure to evaluate and test the imported systems effectively. Without local R&D expertise, there is limited ability to assess these AI systems' safety, ethical standards, and performance. This reliance on foreign suppliers increases the risk of deploying AI systems not adequately aligned with local needs or regulatory standards.

Opaque, Black-Box AI Systems: Another critical issue is the prevalence of opaque, black-box AI systems. These systems are not transparent, meaning the data sources, algorithms, and decision-making processes are hidden from users. In sectors like healthcare or finance, where AI decisions directly impact people's lives, the lack of transparency can lead to biased outcomes and poor performance. When AI systems are not explainable, troubleshooting becomes impossible, and users cannot understand why a decision was made, leading to mistrust in the system's fairness.

Lack of Evaluation for Best Performance: Low-income countries also struggle with the lack of frameworks to evaluate the performance of AI systems. Many of the AI systems imported from high-income countries are designed for different socio-economic conditions and data sets, making it difficult to assess whether they are the best fit for local needs. Additionally, there are no standardized methods or tools to evaluate the performance of AI systems in uncertain or high-stakes situations. This lack of evaluation mechanisms makes it challenging to identify potential failures or issues with AI systems and leads to a higher risk of unintended consequences, particularly in critical sectors such as healthcare and agriculture.
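
As a simple illustration of what such an evaluation step could look like, the sketch below (in Python, with hypothetical data, a placeholder model, and an illustrative acceptance threshold) tests an imported classifier against a locally collected validation set before deployment; it stands in for the kind of standardized check the text argues is missing, not for any particular framework.

# A minimal sketch, assuming hypothetical local data and a placeholder model:
# evaluate an imported classifier on a locally collected validation set and
# compare its accuracy against a locally agreed acceptance threshold.

def imported_model(record):
    # Placeholder for a vendor-supplied classifier; returns a predicted label.
    return 1 if record["blood_pressure"] > 140 else 0

# Locally collected, labelled validation records (illustrative values only).
local_validation_set = [
    {"blood_pressure": 150, "label": 1},
    {"blood_pressure": 120, "label": 0},
    {"blood_pressure": 145, "label": 0},   # a local case the imported rule misreads
    {"blood_pressure": 160, "label": 1},
]

correct = sum(imported_model(r) == r["label"] for r in local_validation_set)
accuracy = correct / len(local_validation_set)

ACCEPTANCE_THRESHOLD = 0.90   # hypothetical value; in practice set by a local regulator
print(f"Local accuracy: {accuracy:.2%}")
print("Deploy" if accuracy >= ACCEPTANCE_THRESHOLD else "Hold: re-evaluate or retrain locally")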

Interoperability Issues: Imported AI systems may also face interoperability issues with existing infrastructure. Many AI systems are not designed to work seamlessly with local infrastructure or the technologies already in use, which can lead to integration challenges. These interoperability issues can result in inefficiencies and increased operational costs, especially in the healthcare, agriculture, and transportation sectors, where the effective use of AI relies on smooth integration with other systems.
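
As a small illustration of the integration work this implies, the sketch below (in Python, with hypothetical field names, units, and conversion factors) shows an adapter that converts locally collected farm records into the schema a hypothetical imported advisory system expects; without such glue code, the two systems cannot exchange data reliably.

# A minimal sketch, assuming hypothetical schemas on both sides: local records
# use metric units and local land measures, while the imported system expects
# different field names and imperial units.

def to_imported_schema(local_record):
    # Convert a local record into the format the imported system expects.
    return {
        "temp_f": local_record["temperature_c"] * 9 / 5 + 32,   # Celsius -> Fahrenheit
        "rain_inch": local_record["rainfall_mm"] / 25.4,        # millimetres -> inches
        "plot_acres": local_record["plot_bigha"] * 0.33,        # bigha -> acres (approximate)
    }

local_record = {"temperature_c": 31.0, "rainfall_mm": 12.0, "plot_bigha": 3.0}
print(to_imported_schema(local_record))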

Cybersecurity and Data Privacy Concerns: Adopting AI systems raises significant cybersecurity and data privacy concerns. Many imported AI solutions rely on data that may not comply with local data protection laws. This increases the risk of data breaches, misuse of sensitive information, and cybersecurity vulnerabilities. In low-income countries with limited capacity to monitor and enforce cybersecurity regulations, using AI systems that do not meet the necessary security standards can expose citizens to significant risks. This can trigger incidents like the 2016 hacking of Bangladesh Bank, in which $81 million was stolen.

Absence of Safety, Ethical, and Legal Policies: Lastly, low-income countries often lack clear policies for AI safety, ethics, and legal compliance. Regulatory frameworks for AI are either underdeveloped or completely absent, especially in critical sectors like healthcare, agriculture, and finance. This lack of regulatory oversight creates vulnerabilities in AI use, including privacy, fairness, transparency, and accountability issues.

Why Trustworthy AI Matters: Trustworthy AI is essential to fostering trust, reducing risks, and ensuring that AI is used responsibly. As AI systems become more integrated into industries and daily life, they must be designed to be transparent, explainable, and ethical.

Ethical, Transparent, and Explainable AI: AI systems must be explainable so that users can understand how decisions are made. Explainable AI ensures that stakeholders can scrutinize the decision-making process and detect potential biases. This transparency fosters trust and confidence in AI systems, ensuring they are used in ways that align with ethical and legal standards.
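
To make the idea concrete, the sketch below (in Python, with hypothetical weights, a hypothetical approval threshold, and illustrative applicant data) shows a deliberately transparent scoring rule that reports how much each feature contributed to a decision; a black-box system would return only the final answer, with nothing for a stakeholder to scrutinize.

# A minimal sketch, assuming a hypothetical linear credit-scoring rule whose
# per-feature contributions can be shown to the affected applicant.

WEIGHTS = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.4}
APPROVAL_THRESHOLD = 2.0   # hypothetical cut-off

def explain_decision(applicant):
    # Compute each feature's contribution, the total score, and the decision.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= APPROVAL_THRESHOLD else "declined"
    return decision, score, contributions

applicant = {"income": 4.0, "years_employed": 2.0, "existing_debt": 1.5}  # normalised units
decision, score, contributions = explain_decision(applicant)

print(f"Decision: {decision} (score {score:.2f}, threshold {APPROVAL_THRESHOLD})")
for feature, value in contributions.items():
    print(f"  {feature}: {value:+.2f}")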

Accountability and Bias Mitigation: Explainable AI also helps mitigate the risks of algorithmic bias. AI systems can inherit biases from the data they are trained on, which can lead to discriminatory outcomes. For example, biased hiring algorithms can favour certain genders or ethnicities, leading to unfair practices. By making AI decision-making transparent, developers can identify and address these biases, ensuring that AI systems are fair, accountable, and aligned with human rights standards.
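
To show how such a check might look in practice, the sketch below (in Python, with hypothetical hiring records and an illustrative tolerance) compares shortlisting rates across gender groups and flags a large gap for human review; real audits would use far more data and more than one fairness metric.

# A minimal sketch, assuming hypothetical hiring records: a simple
# demographic-parity check that compares shortlisting rates by gender.

hiring_decisions = [
    {"gender": "female", "shortlisted": 0},
    {"gender": "female", "shortlisted": 1},
    {"gender": "female", "shortlisted": 0},
    {"gender": "female", "shortlisted": 0},
    {"gender": "male", "shortlisted": 1},
    {"gender": "male", "shortlisted": 1},
    {"gender": "male", "shortlisted": 0},
    {"gender": "male", "shortlisted": 1},
]

def selection_rate(records, group):
    # Fraction of applicants in the given group who were shortlisted.
    group_records = [r for r in records if r["gender"] == group]
    return sum(r["shortlisted"] for r in group_records) / len(group_records)

rates = {g: selection_rate(hiring_decisions, g) for g in ("female", "male")}
gap = abs(rates["female"] - rates["male"])

print(f"Shortlisting rates: {rates}")
if gap > 0.2:   # illustrative tolerance; acceptable gaps are a policy decision
    print(f"Warning: selection-rate gap of {gap:.2f} suggests possible bias to investigate")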

Introducing RaiDOT: A Solution to Overcome Some of These Challenges: RaiDOT (Responsible AI, Insightful Data, and Operational Trust) is an innovative AI tool that helps low-income countries evaluate, assess, and mitigate the risks associated with AI systems. RaiDOT.com is designed to address the challenges faced by countries like Bangladesh that have limited local capacity for AI governance and evaluation.

Key Features of RaiDOT:

• Risk Assessment: RaiDOT provides a comprehensive framework for evaluating AI systems across technical, ethical, and legal dimensions, ensuring that they comply with both international and local laws.
• Explainability: RaiDOT ensures that AI systems are explainable, providing transparency into their decision-making processes and enabling stakeholders to understand and trust the outcomes.
• AI Competency and Compliance Guidelines: RaiDOT helps build AI competency and ensures that AI systems comply with local data privacy laws and industry-specific regulations, such as those governing healthcare and agriculture.

RaiDOT's Role in Capacity Building: In addition to risk management, RaiDOT is a tool for building local AI competencies. It offers personalised learning modules that are tailored to the specific needs of users in various sectors. These modules cover key aspects of AI ethics, governance, and risk management, enabling local professionals to deploy AI systems responsibly and effectively.

A Collaborative Future for AI in Low-Income Countries: To fully realize AI's benefits in low-income countries, these nations must move beyond simply importing AI solutions from high-income countries. RaiDOT provides a comprehensive tool for evaluating, assessing, and mitigating AI risks while ensuring ethical, legal, and transparent deployment. Collaboration among local governments, international organizations, and AI experts will be crucial to overcoming the challenges of AI adoption. By adopting trustworthy AI frameworks and building local capacity, low-income countries can harness AI's full potential safely, ethically, and inclusively.

The writer is CEO of D-Ready, UK; former Professor of AI and Director of ARITI (Cambridge, UK); Chairman of CSE (DU, Bangladesh); and Vice President of CamTech University (Cambodia)


