Artificial intelligence (AI) is poised to profoundly reshape societal structures and industry paradigms, promising transformation across all sectors, from healthcare to finance. The key will be to harness this powerful new tool without exacerbating inequalities.
Can AI’s potential as a driving force for positive impact outweigh the risks it poses to achieving equality? The answer will likely depend in large part on how well we, as a society, can guide its development and application in ethical ways.
The current surge in Generative AI (Gen-AI), a form of machine learning that generates novel human-like content, represents the next wave of AI’s evolution. Its impact is already palpable: within 2 months of its launch, ChatGPT amassed more than 100 million users, signaling a significant shift in the role of AI in content creation and conversational interfaces.
The advent of Gen-AI as a disruptive force is fundamentally altering the economics and creativity of many areas, especially content generation. It is estimated that nearly half of companies across Asia and the Pacific have begun investigating the potential integration of Gen-AI into their business practices in the hope of reducing costs and incubating innovative business models. Preliminary studies suggest that Gen-AI’s potential to augment productivity could contribute trillions of dollars to the global economy, and this is merely the dawn of a truly transformative era.
While AI’s ubiquity continues to grow, there is an inherent need for democratized access to these transformative technologies. Policy makers, industry leaders, and civil society must engage in multi-stakeholder dialogue to ensure that the socioeconomic dividends of AI do not exacerbate existing disparities but instead contribute, on net, to a more equitable landscape for all.
Ethical and social implications of AI: Navigating the risks
AI technologies hold immense promise, but they carry equally significant risks. AI can perpetuate social biases or accelerate the spread of misinformation, for example through deepfake videos.
Gen-AI’s capacity to generate human-like content raises security concerns, including the risk of misuse for social media manipulation. Given the rapid adoption and potential impact of Gen-AI, everyone, especially governments and the technology community, must address these challenges proactively.
Perhaps the biggest concern for those pursuing inclusive growth strategies is the risk of AI deepening economic exclusion, particularly among the poor or disadvantaged. AI-driven automation could displace jobs, especially those involving routine or repetitive tasks (e.g., in manufacturing, customer service, or transportation). This could widen economic inequality, leaving workers facing unemployment or underemployment. Ultimately, those who lack the capabilities to work alongside AI may find themselves disadvantaged. This could be particularly problematic for developing countries, where a greater portion of the population has limited education, literacy, digital skills, or access to technology.
As a part of mitigating such risks, policy makers might consider approaches to ensure the ethical application of this technology. What exactly constitutes “ethical” or “responsible” AI? “Ethical AI” refers to the conscious effort to align AI development and deployment with ethical principles, such as fairness, accountability, transparency, and safety. Some guidelines are already in place, but the absence of a universally accepted standard makes the question complex, maybe unanswerable.
Digital technology does not respect borders, and cultural perspectives on what is ethical vary widely. Given these challenges, international dialogue is not just beneficial—it is essential for shaping responsible AI practices that are globally applicable.
Can AI drive inclusive growth and sustainability?
If its risks are navigated and addressed so that certain groups are not inadvertently disadvantaged, AI can enable a transformative pathway toward achieving the United Nations’ Sustainable Development Goals (SDGs). Projects like AI for Earth have demonstrated AI’s potential for conservation and sustainability, directly aligning with multiple SDGs. Moreover, AI can empower decisions in critical areas, such as healthcare, education, and environmental conservation, by harnessing massive unstructured datasets like social media posts or satellite images. The data to drive these initiatives are being developed now through collaborations like AI4D.
Financial sector
AI can revolutionize access to financial services and play a role in accelerating financial inclusion. For instance, machine learning models can assess credit risk for individuals without a financial history, thereby enabling the unbanked or underbanked to access a broader array of formal financial products and services. Companies like Ant Group in the People’s Republic of China use AI to offer microloans, significantly broadening financial access. Another example is fintech company M-KOPA, which uses AI and non-traditional data to drive credit models for 3 million users.
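To make the credit-scoring idea concrete, here is a minimal, purely illustrative sketch of a "thin-file" model that scores an applicant with no formal financial history from alternative data. The feature names and weights are hypothetical inventions for this example, not the method used by Ant Group, M-KOPA, or any real lender, which would train such weights on historical repayment data rather than hard-code them.

```python
import math

# Hypothetical feature weights for a toy logistic credit model.
# Real systems learn these from repayment outcomes; these values
# are invented for illustration only.
WEIGHTS = {
    "topup_regularity": 2.0,     # 0..1, share of months with a mobile top-up
    "utility_streak": 0.15,      # consecutive on-time utility payments
    "device_tenure_years": 0.4,  # years the applicant has kept the same device
}
BIAS = -3.0

def credit_score(features: dict) -> float:
    """Return an approval probability in [0, 1] via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"topup_regularity": 0.9, "utility_streak": 10,
             "device_tenure_years": 2.0}
print(round(credit_score(applicant), 3))  # prints 0.75
```

The design point is that behavioral signals stand in for a missing credit file, which is precisely what lets the unbanked enter formal finance, and also why such models demand careful bias auditing.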
Food security
AI can provide plot-level crop insights to smallholder farmers, incorporating datasets covering weather and soil conditions. This would be transformative for Asia and the Pacific’s 450 million smallholder farmers, significantly addressing food insecurity while improving crop decisions that could increase yields by 20%–30%.
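A plot-level advisory of this kind can be sketched very simply: combine a soil-moisture reading with a short-range rainfall forecast into an irrigation recommendation. The function, thresholds, and weighting below are hypothetical and illustrative, not agronomic guidance; a production system would learn them from field trials and local weather data.

```python
# Illustrative sketch of a plot-level crop advisory. All thresholds
# and the rainfall weighting are invented for this example.

def irrigation_advice(soil_moisture_pct: float, forecast_rain_mm: float) -> str:
    """Return a simple irrigation recommendation for the next few days."""
    # Crude proxy: expected rain partially offsets low soil moisture.
    effective_water = soil_moisture_pct + 0.5 * forecast_rain_mm
    if effective_water < 30:
        return "irrigate now"
    if effective_water < 50:
        return "irrigate lightly"
    return "no irrigation needed"

print(irrigation_advice(20, 4))   # dry soil, little rain expected
print(irrigation_advice(45, 30))  # moist soil, heavy rain forecast
```

Even a rule this simple shows why the data matter: the recommendation changes entirely once a rainfall forecast is folded in, which is the value such services deliver to farmers who previously had neither input.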
Health
AI can revolutionize healthcare delivery, from diagnostics to treatment personalization. For example, IBM Watson Health has partnered with healthcare organizations to improve patient care through data-driven insights. AI could also identify which at-risk populations need the most help in real time. In addition, AI could be used to speed up access to education for health workers in low-income communities.
Gender equity
AI can help level the playing field for women in the workplace. Automated résumé screening tools with built-in anti-bias algorithms can assist HR departments in making equitable hiring decisions. Organizations like She Loves Data offer training in data skills to women, preparing them for digital-age roles. Women could also be empowered with more “gig economy” skills (and income) to populate needed datasets in less widely spoken languages (such as India’s Kannada). This, in turn, allows AI-driven apps to be created to help low-income speakers in developing countries, efforts supported by organizations such as the Gates Foundation.
Clean and renewable energy
AI technologies can optimize renewable energy production. Google’s DeepMind used machine learning to reduce the energy required to cool its data centers by 40%, demonstrating a scalable model for energy efficiency. In developing economies, AI already supports solar mini-grids in Africa and is showing promise in Southeast Asia.
A call to action for responsible AI leadership
AI’s promise for inclusive development is vast. Its benefits can be accelerated by actively identifying “good but difficult problems”—social issues that are widely impactful yet historically hard to solve.
Realizing this potential requires a concerted effort and ethical stewardship involving leaders, policy makers, and technologists. Responsible AI leadership should focus on harnessing AI for inclusive growth, equitable development, and sustainability within Asia and the Pacific and beyond. Some key examples of responsible AI actions are:
- Identify impactful challenges and apply policy interventions that promote AI-enabled solutions
Countries like India and Indonesia grapple with healthcare accessibility, especially in remote areas. An impactful intervention could be deploying AI-enabled telemedicine solutions to diagnose common conditions, thereby extending the reach of healthcare services to marginalized communities.
- Develop ethical use cases
AI can help identify human trafficking victims by analyzing data from diverse sources like social media, immigration records, and CCTV footage. The ethical concern for policy makers would be ensuring citizens’ data privacy and safety.
- Foster multi-stakeholder collaboration
Singapore’s Smart Nation initiative serves as a case study that involves government, academic institutions, the private sector, and civil society in shaping technology policies and implementing smart solutions, including responsible AI.
- Leverage international frameworks
Projects aiming to achieve zero hunger (SDG 2) could apply AI algorithms to predict crop yields, optimizing agricultural outputs. The Organisation for Economic Co-operation and Development’s AI Principles could guide such projects, emphasizing transparency, fairness, and robustness in AI systems.
- Promote data literacy and public awareness
Public campaigns in countries like Australia and New Zealand have been implemented to educate people on how AI is used daily, from Netflix recommendations to spam filters, fostering better understanding of, and accountability for, how data is used or misused.
- Promote transparency
Companies like Microsoft and Google have released AI Ethics reports on issues like carbon footprints of AI models, bias in AI systems, and economic impacts, setting an industry standard for transparency. Policy makers can work with the private sector to develop reporting standards promoting responsible practices.
- Commit to long-term investment for sustainable impact
Leading AI nations, such as the United States, the People’s Republic of China, and the United Kingdom, have committed long-term investment in AI research and development, aimed not only at economic gains but also at solving complex problems like urban pollution control and efficient energy utilization, demonstrating a sustained commitment to both innovation and sustainability.
This article was originally published by the Griffith Asia Institute as part of a series of articles on inclusive growth and development in the Griffith Asia-Pacific Strategic Outlook 2024.