What’s new in tech
U.S. warns about China’s Huawei
“The heads of six major U.S. intelligence agencies have warned that American citizens shouldn’t use products and services made by Chinese tech giants Huawei and ZTE,” said James Vincent in TheVerge.com. The warning came during a Senate Intelligence Committee hearing last week featuring heavyweights from the FBI, CIA, and NSA as well as the director of national intelligence. The U.S. intelligence community says it remains concerned about the connections between the companies and the Chinese government. FBI Director Chris Wray said the companies’ products could be used to “maliciously modify or steal information” from users or “to conduct undetected espionage.” Huawei, founded by a former Chinese People’s Liberation Army engineer, last year “surpassed Apple as the world’s second-biggest smartphone maker, behind Samsung.”
LinkedIn adds salaries to job listings
Employers are often reluctant to reveal starting salaries when advertising job vacancies, said David Cohen in Adweek. That’s a significant source of frustration for job seekers, who can dedicate hours to applications and interviews only to find that a job’s salary does “not meet their expectations.” LinkedIn says it aims to mitigate that frustration with the rollout of Salary Insights, which will include “estimated or expected salary ranges” for all open roles. The salary data will be provided either by employers or generated “from data submitted by LinkedIn members.” LinkedIn will also connect members with job listings based on the salary information in their confidential profiles.
Facial recognition’s racial bias problem
“Facial recognition technology is improving by leaps and bounds,” said Steve Lohr in The New York Times. “But the darker the skin, the more errors arise.” An MIT study measuring how accurately the technology identifies gender across skin types found commercial software to be correct 99 percent of the time when a photo featured a white male. Yet when the image depicted a “darker-skinned woman,” the error rate was nearly 35 percent. Those results, researchers said, show that “AI software is only as smart as the data used to train it.” If there are significantly more white men than black women in a database used to train an AI, the system “will be worse at identifying the black women.” Part of the challenge, scientists say, is that there is so little diversity within the AI community.
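The mechanism the researchers describe can be sketched with a toy experiment. This is not the MIT study’s method; it is a minimal, hypothetical simulation in which a simple nearest-centroid “gender classifier” is trained on data dominated by one demographic group (group A) with only a few examples from another (group B), whose faces are assumed to occupy a different region of feature space. The group labels, cluster positions, and noise level are all invented for illustration:

```python
import random
import math

random.seed(0)

def sample(cx, cy, gender, n, group, sigma=0.5):
    # Each point: ((x, y) features, gender label, demographic group).
    return [((random.gauss(cx, sigma), random.gauss(cy, sigma)), gender, group)
            for _ in range(n)]

# Training set skewed toward group A -- the imbalance the study describes.
train = (sample(0.0, 0.0, "m", 500, "A") + sample(4.0, 0.0, "f", 500, "A") +
         sample(1.5, 3.0, "m", 25, "B") + sample(2.5, 3.0, "f", 25, "B"))

def centroid(points):
    xs = [p[0][0] for p in points]
    ys = [p[0][1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# One centroid per gender label; group A's sheer numbers dominate both.
cent = {g: centroid([p for p in train if p[1] == g]) for g in ("m", "f")}

def predict(features):
    return min(cent, key=lambda g: math.dist(features, cent[g]))

# Balanced, equally sized test sets for each group.
test_a = sample(0.0, 0.0, "m", 200, "A") + sample(4.0, 0.0, "f", 200, "A")
test_b = sample(1.5, 3.0, "m", 200, "B") + sample(2.5, 3.0, "f", 200, "B")

def error_rate(points):
    return sum(predict(p[0]) != p[1] for p in points) / len(points)

err_a = error_rate(test_a)
err_b = error_rate(test_b)
print(f"group A error: {err_a:.1%}, group B error: {err_b:.1%}")
```

Because both gender centroids are fit almost entirely to group A’s region of feature space, the classifier’s decision boundary passes far from group A’s clusters but cuts close to group B’s, so the underrepresented group sees a markedly higher error rate even though the test sets are the same size.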