Posts in Category: AI

As cryptocurrency adoption gains steam on the African continent, it will be important for potential investors (and, ultimately, regulators) to learn from the scams that have come before. One high-profile example came in 2019, when Uganda’s Dunamiscoin Resources closed suddenly with $2.7 million in investor money. Dunamiscoin had taken money from more than 4,000 people, promising them returns of 30% in 21 days by investing it in bitcoin. The returns never came. Another is Velox 10 Global, a pyramid scheme with roots in Brazil, in which Kenyans lost

Facial recognition company Clearview AI, known for scraping images from the internet to build a global facial recognition database, has been fined more than £7.5 million by the UK’s privacy watchdog. The fine comes just months after a group of US lawmakers called on federal agencies to stop using facial recognition technology built by Clearview AI. In letters signed by Sens. Edward Markey and Jeffrey Merkley, as well as House Reps. Pramila Jayapal and Ayanna Pressley, the technology was said to pose “unique threats” to Black communities, other communities of color,

Back in April 2021, João Gualberto, the district mayor of Mata de São João, held an in-person auction at which Brazilian technology companies bid for a contract to supply facial recognition technology for the public school system. The $162,000 tender was won by PontoiD, and in July that year two public schools, João Pereira Vasconcelos and Celia Goulart de Freitas, began rolling out the facial recognition system without informing parents or students in advance, according to research by Rest of World. Students were registered on the system, which

Clark Atlanta has announced that it has been awarded nearly $12 million in grant funding to establish a “Knowledge Metaverse” hub. The Knowledge Metaverse, according to a school release, “amplifies access and engagement in learning by combining the real world with digital information and extended reality (XR) similar to immersive experiences that have become increasingly popular in arts, gaming, and entertainment.” The grant was supplied by EON Reality, described by the school as “the global leader in augmented and virtual reality learning solutions.” Clark Atlanta is the first HBCU to

People in the US city of New York are subject to “shocking” mass surveillance through facial recognition technology cameras, with the invasive technology especially trained on areas of the city with greater concentrations of non-white residents, new research by Amnesty International and partners has revealed today. The new analysis – published as part of a global Ban The Scan campaign – shows how the New York Police Department’s vast surveillance operation particularly affects people already targeted for stop-and-frisk across the city’s five boroughs. In the Bronx, Brooklyn, and Queens, the research shows

New York City is making a bold move to combat race and gender bias in hiring when businesses use artificial intelligence tools to screen job candidates. Under the new law, employers in the city will be banned from using automated employment decision tools to screen job candidates unless the technology has been subject to a “bias audit” conducted within the year before the tool is used. The City Council passed the measure on November 10, 2021, and it takes effect on January 2, 2023. A PricewaterhouseCoopers 2017 study found

A tech company that provides human resources training to large corporations has just been revealed to be using white actors to portray people of color in sessions about diversity, equity, and inclusion. During the training sessions, there were reportedly scenarios in which Child Protective Services removed a child from a Black family, and in each case white actors played the roles of the Black characters. In other VR simulations, white actors played characters of Asian descent, and neurotypical adults played autistic children. Mursion, a corporate education company whose clients include Coca-Cola and Starbucks, has

Black Lives Matter (BLM) co-founder Opal Tometi has urged the tech sector to take robust action against perpetuating racism in systems such as facial recognition. “A lot of the algorithms, a lot of the data is racist,” U.S. activist Tometi, who co-founded BLM in 2013, told Reuters on the sidelines of Lisbon’s Web Summit. “We need tech to truly understand every way it (racism) shows up in the technologies they are developing,” she said. Her comments come just a day after Facebook announced it was shutting down its facial recognition

Facebook is planning to shut down its face-recognition system and delete the faceprints of more than 1 billion people. At present, more than a third of Facebook’s daily active users have opted to have their faces recognized by the social network’s system; that’s about 640 million people. But according to the AP, Facebook recently began scaling back its use of facial recognition after introducing it more than a decade ago. The move comes years after organizations and people of color complained about how problematic AI and facial recognition
