
Tech Firms Prey on Poor Under Guise of Expanding Access to Financial Services

Finance, technology and social media companies often position themselves as solutions to problems they helped create.

OpenAI CEO Sam Altman walks on the House side of the U.S. Capitol on January 11, 2024, in Washington, D.C.

OpenAI CEO Sam Altman has been embroiled in a series of controversies over the company’s decision-making. Most recently, OpenAI released a ChatGPT voice that closely resembled actress Scarlett Johansson’s, without her permission. Before that, Altman helped launch a cryptocurrency project in 2021 called Worldcoin that scanned people’s eyeballs and collected biometric data in exchange for digital money. Despite the controversy surrounding the project, Altman believed Worldcoin could enable a crypto-based global universal basic income (UBI), in which people receive money on a routine basis with no strings attached. Altman has said of the project, “I’ve been very interested in things like universal basic income and what’s going to happen to global wealth redistribution and how we can do that better. Is there a way we can use technology to do that at global scale?”

Concerned that collecting retina scans and biometric data in exchange for digital money concealed malintent, reporters from MIT Technology Review observed Worldcoin’s registration events and interviewed some of its participants. Their 2022 investigation uncovered “wide gaps between Worldcoin’s public messaging, which focused on protecting privacy, and what users experienced.” At the time, 450,000 people had participated in the project. Of the 24 countries where Worldcoin was operating, over half were developing nations and one-third were located on the African continent. The people targeted by Worldcoin felt they had been coerced into relinquishing their data because they needed money. Despite the company’s promises, many worried that their data would not be kept private.

Worldcoin epitomizes a familiar pattern of disguising finance and technology as solutions to poverty and inequality. The world’s preeminent financial institution, the International Monetary Fund (IMF), professes to reduce poverty and promote economic growth using the tools of finance. However, the IMF imposes austerity measures that require countries to reduce their social spending in exchange for access to capital, essentially forcing cuts to the very programs intended to address poverty. The IMF can also charge high interest rates on loans. For African nations, many of whose economies were plundered during periods of colonization, IMF debt worsens economic crises rather than providing relief, as Grieve Chelwa and colleagues with the Collective on African Political Economy have explained. Yet somehow, Worldcoin is supposed to address these deeply rooted problems by scanning people’s eyeballs.

Today, some 4.5 million people in 120 countries have relinquished their biometric data to Worldcoin. And while it’s still unclear exactly how Worldcoin intends to use retina scans and biometrics for identification, technology companies have a track record of monetizing the sensitive data they once promised to encrypt or delete. Several places, including Kenya, Portugal, Spain and Hong Kong, have since paused or banned Worldcoin’s biometric data collection as a precaution. And, despite having headquarters in the United States and Germany, Worldcoin is prohibited from operating or under heavy scrutiny in both countries.

In Kenya, Nairobi News raised concerns about Worldcoin with a headline referring to “data colonisation,” drawing a parallel between the cryptocurrency project’s activities and Western and European countries’ centuries-long efforts to exploit African countries and extract profits. These concerns are similar to the ways legal scholar Joy Malala describes financial technology companies’ projects in the Global South: taking advantage of “new markets for western neoliberal ideas.” By collecting the biometric data of poor people living in countries experiencing economic challenges and with histories of being colonized, Worldcoin appears to have simply invented another way of colonizing and extracting profits — all while promising to help.

Despite these concerns, Kenya recently dropped a police investigation into data privacy issues and opened the door for Worldcoin to resume collecting biometric data.

The gap between Worldcoin’s public messaging and users’ experiences is consistent with a trend among financial technology companies that perform benevolence through stated commitments to equity. These performances help obfuscate their intentions to extract and profit from user information.

In May, we published a study in Socio-Economic Review that maps this trend over nearly 30 years by analyzing finance, technology and social media companies’ newswires and press releases. Our analysis documents the promises that these companies make about their products and services.

Finance, technology and social media companies often claim that digital technologies can solve problems of poverty or financial access. Companies contend that products and services like online bank accounts and payment platforms can help people pay rent and utility bills, send money to friends and family, and borrow for emergencies. And — following a similar pattern to Worldcoin — women, Black and Brown people, immigrants and people earning low incomes and living in the Global South are frequent targets of these claims.

Rather than aiding their clients, these companies threaten to expose people to new forms of predation and extraction already established by traditional banks and lenders.

What’s more, these companies position themselves as solutions to problems of poverty and inequality — problems they helped create.

Take racial wealth inequalities as one example. Banks’ racist lending practices are well documented: banks helped white people in the United States build wealth by capitalizing on chattel slavery, offering them low interest rates on mortgage loans and appraising their property at inflated values. Meanwhile, banks often deny loans to Black customers altogether or profit by charging them exorbitant interest rates. Despite this abysmal track record, banks and other financial institutions insist their banking, lending and credit scoring services are key to helping Black people save money and accumulate wealth.

Now, against a backdrop of growth in the finance and technology industries that contributes to widening inequality, companies are introducing new products and services for the supposed purpose of expanding access to capital for Black and other marginalized communities — prompting renewed concerns about predation and extraction. In introducing these new products and services, companies repackage the old problems created by finance and technology.

We found that, over time, companies have taken advantage of new digital technologies to introduce products and services resembling the originals, under the guise of expanding access. In the late 1990s and early 2000s, companies described direct payroll deposit, online banking and electronic money transfers as expanding access and giving people greater control over their finances. In the mid-to-late 2010s, companies described smartphone applications, payment platforms and cryptocurrencies in exactly the same ways.

In 1995, Online Resources promised to give people “greater control of their finances” through its electronic money transfer services and, in 1997, Block Financial Corporation’s president claimed underwriting loans online would help borrowers to “maximize their financial freedom.” Presumably, debt would set customers free. In 2020, Ally Financial Inc. promised to “tackle America’s savings challenge,” echoing a decades-long concern about savings rates in the United States and referencing the nearly half of people who struggle to come up with $400 in an emergency. Yet the companies offered no data demonstrating the effectiveness of their products and services.

Our study did not directly test outcomes, such as whether customers’ access improved or whether they gained greater control of their finances. However, companies made big promises and consistently failed to offer proof of their claims. It is reasonable to question what outcomes, if any, were achieved.

Intuit Inc., a financial technology company known for products including QuickBooks, TurboTax and Credit Karma, showed up regularly in our data. In 1999, Intuit Inc. announced in a press release they were making their online tax software free to low-income filers. The company claimed the free tax filing software would “help bridge this country’s ‘technology gap’” and make tax preparation “affordable and accessible” for people with annual incomes under $20,000. Intuit Inc.’s CEO at the time, Bill Harris, praised the program, saying, “We think it is important for everyone in the country to be able to enjoy the great benefits of the new digital revolution.”

Yet a multiyear ProPublica investigation published two decades later, in 2019, showed how Intuit Inc. used promises of free online tax software to discourage the Internal Revenue Service (IRS) from launching its own free online filing system. What’s more, Intuit Inc. sometimes charged $200 or more for its supposedly free filing and intentionally made the company’s free filing page hard to find through Google and other search engines. For people who actually managed to find the free online filing page, Intuit Inc. used the opportunity to hawk loans and other costly financial products.

An Intuit Inc. spokesperson responded to ProPublica’s reports by saying, “We empower our customers to take control of their financial lives, which includes being in charge of their own tax preparation.” This statement, like the financial technology company’s business practices, obscured the ways Intuit Inc. deceived customers and undermined their empowerment.

Companies used language that disguised the ways people can be separated from their data and information. This pointed to a potential gap between companies’ public messaging and users’ experiences, similar to the ways people described being conflicted about giving their biometric data to Worldcoin. Companies introduced authentication software in post-apartheid South Africa, operated encryption algorithms in emerging markets in 180 countries, and enabled electronic remittances for sending money to family and friends in Bangladesh, Brazil, India, Pakistan and the Philippines.

Piñata, a U.S.-based financial services and membership rewards program for renters, landlords and property managers, launched in mid-2020. Piñata claimed to help renters improve their credit scores by reporting their rental payments directly to major credit bureaus, similar to how homeowners can build credit by making monthly mortgage payments. The company also promised renters a “secure platform” for connecting their bank account and easily “verif[ying] their payments.” As CEO Lily Liu explained, “Building credit is a daunting task for renters, disproportionately low-income and minority Americans, who haven’t had the same opportunity to build credit.”

However, in reading between the lines of the press release, Piñata’s services relied on collecting and sharing its customers’ data even as the company promised a secure verification process. What’s more, Piñata also advertised the app to landlords and property managers. As the company explained, “Landlords and property managers can also utilize Piñata as a new amenity that incentivizes key renter behavior such as reducing late payments and reporting maintenance issues.” Even though Piñata tried to allay any concerns about data security, people’s data were likely shared in new ways with the very entities making decisions about where they lived and slept.

In using words like “verification,” “authentication” and “encryption,” companies attempted to promote a sense of security and privacy. Yet just because data or information is encrypted doesn’t mean that it is unseen or inaccessible. Instead, companies’ claims raise questions about who is afforded access to data and information.

The patterns that emerge in our study show how companies can make promises while simultaneously serving marginalized people up for new and potentially exploitative products and services. For instance, before the Federal Trade Commission (FTC) sued Walmart in 2022 for facilitating fraud in its money transfer platform at a cost to customers of over $1.3 billion, the company boasted in a press release of its “commitment to help customers save on money transfers” and its “innovative” platform that was “transparent” and “low-cost.” For years, customers effectively paid for their own victimization through the fees they were charged to use Walmart’s money transfer services.

Walmart is charged with knowingly letting scammers use its money transfer services to exploit unwitting customers. In one case, scammers targeted elderly people by asking for money and then using Walmart locations to pick up the cash transfers. Scammers sometimes collected hundreds of thousands of dollars from a single transfer or picked up multiple transfers in a single day. The FTC asserts that activities like these should have raised red flags. In another case, scammers identified people as sweepstakes winners and instructed them to send money to claim their prize. The victims never received their promised winnings.

The FTC alleged the company lacked a sufficient anti-fraud policy, allowed cash pickups for large payments, failed to properly train its employees and neglected to warn its customers about scams. Despite the significant costs to customers, Walmart claimed in a 2019 press release to have saved customers over $1 billion, asserting that its transfer services increased market competition and thereby lowered the overall cost of sending money.

These practices aren’t isolated to a single company. Importantly, and ominously, this is a decades-long trend across companies with products and services ranging from retail banking to housing, education, health care and government services — with major consequences for people who are already subjected to exploitation and discrimination in multiple contexts.

Data Rights

The Consumer Financial Protection Bureau (CFPB) was established in 2010 to enhance financial services regulation in response to the Great Recession. The law governing the bureau includes a provision stating that financial services providers must make a customer’s data available to them upon request. For example, a bank or financial technology company would need to provide a customer with their transaction information and other data gathered in the course of doing business. This is a step in the right direction for scrutinizing financial technology companies’ benevolent promises.

The CFPB is considering a rule on personal financial data rights, which would affect finance, technology and social media companies. However, the proposed rule continues many troublesome data collection practices. Finance, technology and social media companies would still have free rein to store, analyze and use customer data to identify vulnerabilities and target them with products — the definition of predatory behavior. And credit bureaus, which collect massive amounts of data and information on people’s financial activities for calculating credit scores, would be allowed to keep their algorithms private.

The mutually reinforcing trends of an expanding financial technology industry and the enshrining of data collection within financial services place everyone at risk of predation and extraction — even more so people who are already vulnerable. Worse, these coordinated trends obscure the already nebulous activities of finance, technology and social media companies.

Critical scholars of finance and technology, such as Raúl Carrillo, Tamara Nopper and Chris Gilliard, have suggested providing stronger regulatory oversight, preventing the massive collection and monetization of people’s data and information, and holding companies accountable for their harms. Accountability can include sanctioning companies headquartered in the United States for their harms caused overseas, such as penalizing companies like Worldcoin for any nefarious activities in the Global South. These suggestions are more encompassing and protective than the proposed CFPB rule.

Overall, our work on this issue calls for a stronger critique of finance and technology companies that narrate themselves into stories of equity and economic justice. Claims like those of Worldcoin, accompanied by buzz phrases like “universal basic income,” should not be taken at face value. Like all financial institutions, these companies aim to profit from their endeavors.