How Western AI Models Risk Jeopardizing Egypt's Financial Future
Charlotte Long | Avery Cotton
The rapid adoption of artificial intelligence is reshaping financial systems across the globe, and Egypt is no exception. Under its Vision 2030 agenda for sustainable, inclusive growth, the country is rapidly adopting AI-driven technologies to expand financial access and improve economic efficiency and productivity. This transformation largely draws on Western financial models, where AI has already been integrated into core institutional practice. However, this reliance on imported technological frameworks raises a critical concern: the AI systems being implemented were largely designed for vastly different economic and social environments, and may not align with Egypt’s financial realities. As a result, rather than resolving structural inequalities between the banked population and the informal economy, these technologies risk reinforcing, or even deepening, them.
The vast majority of AI financial models deployed in Egypt today are generative large language models (LLMs) trained on datasets drawn primarily from Western economies. Within these models, the raw data used for predictive modelling comes from borrowers with formal bank accounts, documented salary histories, and traceable credit records. In Egypt, according to a recent report on AI by the Organisation for Economic Co-operation and Development (OECD), formal banking is not the norm: approximately 67% of Egyptian adults are unbanked or underbanked. This mismatch means that these models risk misclassifying or excluding the very populations they are meant to serve.
On top of this, a significant portion of the country’s economic activity flows through informal channels: street traders, family lending networks, and rotating savings groups known locally as gam’eyas. The gam’eya is a traditional informal savings scheme in Egypt in which a group of individuals each contributes a fixed sum of money every month, with participants taking turns receiving the pooled total. These undocumented savings circles represent a sophisticated form of community-based financial infrastructure. Repayment discipline is enforced not by contracts or collateral, but by the shared social trust embedded in these informal agreements. High participation and low default rates have allowed the system to sustain Egyptian households for generations, driving broader financial growth and innovation. However, none of this activity is captured by any credit bureau; as a result, an AI model has no way to recognise a gam’eya participant’s track record, producing systematically biased assessments of creditworthiness.
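The mechanics of a gam’eya can be sketched in a few lines of code. The group size, contribution amount, and simple round-robin payout order below are illustrative assumptions for exposition, not data from any real scheme:

```python
# Illustrative sketch of a gam'eya: n members each contribute a fixed
# sum every month, and one member per month receives the pooled total.
# Member names and amounts here are hypothetical.

def simulate_gameya(members, monthly_contribution):
    """Map each member to the month (0-based) in which they collect
    the pool, assuming a simple round-robin payout order."""
    pool = len(members) * monthly_contribution  # the full pot each month
    schedule = {}
    for month, recipient in enumerate(members):
        # every member pays in each month; this month's recipient
        # takes the whole pot
        schedule[recipient] = {"month": month, "received": pool}
    return schedule

# Four members contributing 500 EGP a month: each receives 2,000 EGP
# exactly once over the four-month cycle, having paid in 2,000 EGP in
# total. The value exchanged is the timing of access to a lump sum.
schedule = simulate_gameya(["Mona", "Karim", "Sara", "Omar"], 500)
```

The point the sketch makes concrete is that a full cycle leaves every participant net-zero in cash terms; what the system actually provides, and what a credit bureau never records, is a verifiable history of on-time contributions.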
Researchers refer to this problem as the “algorithmic blind spot”: an AI model encounters a borrower whose financial profile does not conform to its training data. The concept of LLMs as stochastic parrots, a term coined by linguistics professor Emily Bender and her co-authors, suggests that these models fail to account for meaningful differences across contexts, leading to the systematic punishment of borrowers whose profiles deviate from the training data. Consider a small business owner in Cairo: even if they consistently repay obligations on time within a community of trust, they may nonetheless be classified as high-risk simply because they lack a credit card or a formal payslip. The models are not measuring creditworthiness; they are measuring resemblance to a typical Western borrower.
At a systemic level, the adoption of AI models that misinterpret or overlook the dynamics of Egypt’s informal economy risks distorting the allocation of capital across the financial system. When risk is inaccurately assessed, investment is more likely to be directed toward already established, data-rich sectors, while informal but economically vital activities are neglected. This creates a structural bias against small and medium enterprises, particularly those led by rural entrepreneurs and first-generation entrants into the formal economy. Roughly 18% of Egyptian adults are self-employed business owners. Rather than facilitating inclusion, poorly adapted AI systems may actively restrict upward mobility by denying these groups access to credit and financial services. In this way, the adoption of Western-trained models does not merely produce inefficiency; it reinforces existing inequalities under the guise of technological advancement.
Recently, World Bank researchers have begun developing fairness-aware machine learning techniques that address exactly this kind of structural bias. By identifying and down-weighting characteristics that act as proxies for social exclusion rather than evidence of financial irresponsibility, such as the absence of a credit card, these models can be recalibrated to assess creditworthiness more accurately in contexts where formal financial infrastructure is limited. What is missing is the institutional will to deploy them and take the risk. This is an area where policy intervention is especially important. The Central Bank of Egypt (CBE) has already shown promise for this type of innovation through its regulatory sandbox framework, which allows fintech companies to test new products in a controlled environment. Expanding this model to explicitly require localized training data and bias audits for any AI credit model would be a significant first step. By partnering with Egypt’s large mobile wallet providers, whose transaction data captures the informal economy far better than any bank statement, Egypt can build raw datasets that actually reflect its financial realities.
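The down-weighting idea can be illustrated with a toy linear credit score. The feature names, weights, and attenuation factors below are assumptions made for illustration only; they do not represent the World Bank’s actual techniques or any production scoring model:

```python
# Hedged sketch of fairness-aware down-weighting: features that proxy
# formal-sector membership (holding a credit card, having a payslip)
# are attenuated, while direct evidence of repayment behaviour and
# informal-economy signals (mobile wallet activity) keep full weight.
# All weights and features are illustrative assumptions.

BASE_WEIGHTS = {
    "has_credit_card": 2.0,         # proxies formal-sector access
    "formal_payslip": 1.5,          # proxies formal employment
    "on_time_repayment_rate": 3.0,  # direct evidence of reliability
    "active_mobile_wallet": 1.0,    # informal-economy signal
}

# Attenuation factor per proxy feature: 0.0 removes it, 1.0 keeps it.
PROXY_ATTENUATION = {"has_credit_card": 0.2, "formal_payslip": 0.2}

def credit_score(features, weights=BASE_WEIGHTS,
                 attenuation=PROXY_ATTENUATION):
    """Linear score with exclusion proxies scaled down."""
    score = 0.0
    for name, value in features.items():
        effective_weight = weights.get(name, 0.0) * attenuation.get(name, 1.0)
        score += effective_weight * value
    return score

# A Cairo shop owner with a strong informal repayment record but no
# card or payslip, versus a salaried card-holder who repays less
# reliably:
informal = {"has_credit_card": 0, "formal_payslip": 0,
            "on_time_repayment_rate": 0.95, "active_mobile_wallet": 1}
formal = {"has_credit_card": 1, "formal_payslip": 1,
          "on_time_repayment_rate": 0.60, "active_mobile_wallet": 0}
```

With the proxy features at full weight (`attenuation={}`), the salaried borrower outscores the shop owner purely on formal-sector markers; once those markers are attenuated, the ranking flips and repayment behaviour dominates, which is the recalibration the paragraph describes.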
The assumption that generative AI will seamlessly modernize Egypt’s financial system risks overlooking how deeply these models are shaped by foreign economic structures. Algorithms that drive financial decision-making are not neutral; when trained on Western data, they inherently reward Western financial behaviours while misinterpreting or marginalizing local practices. As one of the more ambitious adopters of AI in finance, Egypt has the opportunity not simply to follow existing models, but to question and reshape them, setting a critical precedent for AI integration in developing economies. Achieving this will require policymakers to move beyond adoption and toward adaptation, ensuring that these systems are built on assumptions that reflect Egypt’s own economic structures, rather than importing biases that could entrench inequality under the guise of progress.
Charlotte Long is a sophomore studying economics and sociology at Columbia College. More specifically, she is interested in emerging commodities markets in countries such as Egypt. She is the head of the Columbia International Business Club and has been writing for CEMR for one semester. In her free time, she is also a member of the Columbia Women’s rowing team.