To: bull_dozer who wrote (189264) 6/27/2022 12:20:08 AM From: TobagoJack
Am guessing that 'they' are working out the details in Australia to roll out across Five Eyes domains. Inevitable and inexorable is the way.

bbc.com
The nation where your 'faceprint' is already being tracked

Clearview has created a searchable database of 20 billion facial images, largely by scraping photos from social media without consent. Clearview's chief executive, Hoan Ton-That, has said the company will not work with authoritarian governments such as China, North Korea and Iran. However, it has encountered problems in some democracies. It has been banned in Canada and Australia, and on 24 May the UK's Information Commissioner's Office (ICO) fined it more than £7.5m (US$9.1m), following a joint investigation with the Office of the Australian Information Commissioner. It was ordered to delete the data of British residents from its systems. In December 2021, France's privacy watchdog found that Clearview had breached Europe's General Data Protection Regulation (GDPR).

Edward Santow, a former Australian human rights commissioner, says the aim in Australia is to develop a nuanced approach: one that encourages positive applications while imposing guardrails to prevent harm. The worst-case scenario would be to replicate the "social credit" system in China, where individuals and organisations are tracked by the government to determine their "trustworthiness".

"In determining whether a use is beneficial or harmful, we refer to the basic international human rights framework that exists in almost every jurisdiction in the world," says Santow. For example, the law would require free and informed consent for facial recognition to be used. However, if the technology were causing discrimination through its inaccuracy with certain groups, that consent would become irrelevant. As Santow says: "You cannot consent to being discriminated against."

Increasingly sophisticated and powerful

"In the next couple years, we're going to see a big shift away from people using passwords, which are totally insecure. Biometrics will become the default," says Garrett O'Hara, field chief technologist at security company Mimecast.

Facial recognition works by dividing the face into a series of geometric shapes and mapping the distances between its "landmarks", such as the nose, eyes and mouth. These distances are compared with other faces and turned into a unique code called a biometric marker. "When you use a facial recognition app to open your phone, it isn't a picture of your face that your phone stores," explains O'Hara. "It stores an algorithmic derivation of what your face is mathematically. It looks like a long code of letters and numbers."
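To make the "biometric marker" idea concrete, here is a minimal Python sketch of the landmark-distance scheme the article describes. Modern systems derive the marker from a deep neural network rather than raw distances, and the landmark names, coordinates and 0.1 threshold below are illustrative assumptions, not any vendor's real parameters.

# Minimal sketch of the "landmark distance" idea described above.
# Real systems learn far richer representations; these landmark names
# and the 0.1 threshold are illustrative, not any vendor's parameters.
from itertools import combinations
import numpy as np

def biometric_marker(landmarks: dict[str, tuple[float, float]]) -> np.ndarray:
    """Turn 2-D facial landmarks into a fixed-length vector of pairwise
    distances, scale-normalised so image size does not matter."""
    points = [np.asarray(landmarks[name]) for name in sorted(landmarks)]
    dists = np.array([np.linalg.norm(a - b) for a, b in combinations(points, 2)])
    return dists / dists.max()  # the marker encodes ratios, not pixels

def same_person(marker_a: np.ndarray, marker_b: np.ndarray,
                threshold: float = 0.1) -> bool:
    """Verification: declare a match if the markers are close enough.
    The threshold trades false matches against false non-matches."""
    return float(np.linalg.norm(marker_a - marker_b)) < threshold

# Toy usage: landmark pixel coordinates as a detector might report them.
enrolled = biometric_marker({"left_eye": (100, 120), "right_eye": (160, 118),
                             "nose": (130, 160), "mouth": (130, 200)})
probe = biometric_marker({"left_eye": (201, 242), "right_eye": (321, 238),
                          "nose": (261, 321), "mouth": (260, 401)})  # same face, 2x scale
print(same_person(enrolled, probe))  # True: the ratios survive rescaling

The choice of threshold is the lever that matters, and it is what drives the error rates discussed next.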
Facial recognition has come a long way since it was first developed in the 1960s, although the error rate varies significantly between the systems in use today. At first it was unable to distinguish between siblings, or to track the changes in a person's face as they aged. It is now so sophisticated that it can identify someone wearing a facemask or sunglasses, and it can do so from more than a kilometre away. The best face identification algorithm has an error rate of just 0.08%, according to tests by the National Institute of Standards and Technology. However, this level of accuracy is only possible in ideal conditions, where the facial features are clear and unobscured, the lighting is good and the individual is facing the camera. The error rate for individuals captured "in the wild" can be as high as 9.3%.

"It's incredibly useful technology. But if somebody had asked us 20 years ago, when the worldwide web was starting up, if we wanted to live in a world where our interactions and activity were collected and tracked, the majority of us probably would have said that it sounded creepy," says O'Hara. "We're now replicating the tracking of online space to include physical space as well. And we're not asking the questions about it that we should be."

One of its most problematic aspects is its potential for racial discrimination and bias. Most facial recognition applications were initially trained on data sets that were not representative of the full breadth of the community. "In the early days, the datasets being used were all taken from white males or white people in general," says O'Hara. "And clearly, it leads to problems when you've got people of colour or of different ethnicities or backgrounds that don't match the training models. At the end of the day, it's just mathematics. This is the problem."
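O'Hara's "it's just mathematics" point can be made concrete: when a model is trained on unrepresentative data, genuine-pair similarity scores for the under-represented group tend to sit lower, so a single global decision threshold rejects that group far more often. A short sketch with fabricated score distributions (every number below is made up for illustration; FNMR is the false non-match rate, FMR the false match rate):

# Sketch of how training-data bias shows up in the numbers: the same
# global threshold yields very different error rates per group.
# All scores below are fabricated for illustration only.
import numpy as np

def error_rates(genuine: np.ndarray, impostor: np.ndarray, threshold: float):
    """False non-match rate (genuine pairs wrongly rejected) and
    false match rate (impostor pairs wrongly accepted) at a threshold."""
    fnmr = float(np.mean(genuine < threshold))   # misses
    fmr = float(np.mean(impostor >= threshold))  # false alarms
    return fnmr, fmr

rng = np.random.default_rng(0)
threshold = 0.5

# Hypothetical similarity scores: group B's genuine pairs score lower
# on average, as happens when the training data under-represents them.
groups = {
    "group_A": rng.normal(0.75, 0.10, 10_000),  # well represented in training
    "group_B": rng.normal(0.60, 0.12, 10_000),  # under-represented
}
impostor = rng.normal(0.30, 0.10, 10_000)  # scores for non-matching pairs

for name, genuine in groups.items():
    fnmr, fmr = error_rates(genuine, impostor, threshold)
    print(f"{name}: FNMR {fnmr:.1%}, FMR {fmr:.1%}")
# group_A: FNMR ~0.6%  -> rarely locked out
# group_B: FNMR ~20%   -> rejected far more often at the same threshold

At the same threshold the under-represented group is falsely rejected roughly thirty times as often, the kind of demographic differential NIST has documented in its own testing.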