经济学人2019.8.17/Fooling Big Brother

Fooling Big Brother
戏弄“老大哥”(有独裁者的含义;文中指借计算机人脸识别系统组建起来的监视/识别网络)
As face-recognition technology spreads, so do ideas for subverting it
随着人脸识别技术的普及,破坏它的想法也在不断涌现
词汇
Subvert/颠覆;推翻;破坏
They work because machine vision and human vision are different
“破坏”之所以有效,是因为机器视觉和人类视觉是不同的

Aug 17th 2019
POWERED BY advances in artificial intelligence (AI), face-recognition systems are spreading like knotweed. Facebook, a social network, uses the technology to label people in uploaded photographs. Modern smartphones can be unlocked with it. Some banks employ it to verify transactions. Supermarkets watch for under-age drinkers. Advertising billboards assess consumers’ reactions to their contents. America’s Department of Homeland Security reckons face recognition will scrutinise 97% of outbound airline passengers by 2023. Networks of face-recognition cameras are part of the (敏感删除) China has built in Xinjiang, in the country’s far west. And a number of British police forces have tested the technology as a tool of mass surveillance in trials designed to spot criminals on the street.
在人工智能(AI)进步的推动下,人脸识别系统正像杂草一样蔓延开来。社交网络Facebook使用这项技术在上传的照片中给人贴上标签。现代智能手机可以用它解锁。一些银行用它来验证交易。超市用它盯防未成年饮酒者。广告牌借此评估消费者对其内容的反应。美国国土安全部估计,到2023年,97%的出境航班乘客将接受人脸识别检查。人脸识别摄像头网络是中国在遥远的西部新疆所建立的(敏感删除)的一部分。一些英国警察部门也已测试了这种技术,将其作为大规模监控的工具,用于在街头识别罪犯。
词汇
Knotweed/虎杖;蓼科蔓生杂草
Scrutinize/作仔细检查;细致观察
A backlash, though, is brewing. The authorities in several American cities, including San Francisco and Oakland, have forbidden agencies such as the police from using the technology. In Britain, members of parliament have called, so far without success, for a ban on police tests. Refuseniks can also take matters into their own hands by trying to hide their faces from the cameras or, as has happened recently(敏感删除), by pointing hand-held lasers at CCTV cameras to dazzle them (see picture). Meanwhile, a small but growing group of privacy campaigners and academics are looking at ways to subvert the underlying technology directly.
然而,一场强烈抵制正在酝酿之中。包括旧金山和奥克兰在内的几个美国城市的当局,已经禁止警察等机构使用这种技术。在英国,有议员呼吁禁止警方试用该技术,但至今未获成功。反抗者也可以自己动手:比如设法把脸遮起来不让摄像头拍到,或者像最近(敏感删除)发生的那样,用手持激光笔照射闭路电视摄像头,使其“眼花缭乱”(见图,标题图已更替)。与此同时,一个规模虽小但不断壮大的隐私维权人士和学者群体,正在寻找直接颠覆其底层技术的方法。
词汇
Backlash/反冲;强烈抵制
Refusenik/拒绝者,反抗者
Put your best face forward
将你最棒的一面置于前方
Face recognition relies on machine learning, a subfield of AI in which computers teach themselves to do tasks that their programmers are unable to explain to them explicitly. First, a system is trained on thousands of examples of human faces. By rewarding it when it correctly identifies a face, and penalising it when it does not, it can be taught to distinguish images that contain faces from those that do not. Once it has an idea what a face looks like, the system can then begin to distinguish one face from another. The specifics vary, depending on the algorithm, but usually involve a mathematical representation of a number of crucial anatomical points, such as the location of the nose relative to other facial features, or the distance between the eyes.
人脸识别依赖于机器学习(machine learning)。这是人工智能的一个子领域:计算机自行学会完成那些程序员无法向它们明确说明的任务。首先,系统要用成千上万张人脸样本进行训练。当它正确识别出人脸时给予奖励,识别错误时加以惩罚,就能教会它区分包含人脸的图像和不包含人脸的图像。一旦它知道一张脸大致是什么样子,系统就可以开始区分不同的人脸。具体做法因算法而异,但通常涉及若干关键解剖点的数学表示,比如鼻子相对于其他面部特征的位置,或者双眼之间的距离。
词汇
Anatomical/ 解剖的;解剖学的;结构上的
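下面用一小段 Python 示意上文所说的“关键解剖点的数学表示”:把几个面部关键点之间的两两距离(按双眼间距归一化)拼成特征向量,再用向量距离判断是否为同一张脸。这只是帮助理解的玩具草图——真实系统的特征是学习得到的,这里的关键点集合、归一化方式和阈值都是为说明而假设的。

```python
# 玩具示例:把“鼻子相对其他特征的位置、双眼间距”等关键点
# 转成一个与图像尺度无关的特征向量,再比较两张脸。
# 关键点集合与阈值均为假设,并非任何真实系统的参数。
import itertools
import numpy as np

LANDMARKS = ["left_eye", "right_eye", "nose_tip", "mouth_left", "mouth_right"]

def face_signature(points: dict) -> np.ndarray:
    """把命名关键点坐标变成按双眼间距归一化的两两距离向量。"""
    coords = np.array([points[name] for name in LANDMARKS], dtype=float)
    dists = np.array([np.linalg.norm(a - b)
                      for a, b in itertools.combinations(coords, 2)])
    inter_ocular = np.linalg.norm(coords[0] - coords[1])  # 双眼间距
    return dists / inter_ocular

def same_person(face_a: dict, face_b: dict, threshold: float = 0.1) -> bool:
    """两个特征向量足够接近时判定为同一人(阈值为假设值)。"""
    return np.linalg.norm(face_signature(face_a) - face_signature(face_b)) < threshold

# 同一张脸放大两倍拍摄,归一化后仍然匹配。
face1 = {"left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 60),
         "mouth_left": (38, 80), "mouth_right": (62, 80)}
face2 = {k: (2 * x, 2 * y) for k, (x, y) in face1.items()}
print(same_person(face1, face2))  # True
```

真实系统(如后文提到的FaceNet)通常不用这种手工几何特征,而是让网络自己学出一个嵌入向量,但“把脸变成可比较的数字”这一点是相通的。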
In laboratory tests, such systems can be extremely accurate. One survey by NIST, an American standards-setting body, found that, between 2014 and 2018, the ability of face-recognition software to match an image of a known person with the image of that person held in a database improved from 96% to 99.8%. But because the machines have taught themselves, the visual systems they have come up with are bespoke. Computer vision, in other words, is nothing like the human sort. And that can provide plenty of chinks in an algorithm’s armour.
在实验室测试中,这类系统可以非常精确。美国标准制定机构NIST的一项调查发现,从2014年到2018年,人脸识别软件把已知人物的图像与数据库中此人图像相匹配的能力从96%提高到了99.8%。但正因为机器是自学的,它们形成的视觉系统是“自成一套”的。换句话说,计算机视觉与人类视觉完全不同。而这就给算法的“盔甲”留下了不少可乘之隙。
词汇
Bespoke/定做的,定制的
Chink/(盔甲上的)裂缝;漏洞,可乘之隙
In 2010, for instance, as part of a thesis for a master’s degree at New York University, an American researcher and artist named Adam Harvey created “CV [computer vision] Dazzle”, a style of make-up designed to fool face recognisers. It uses bright colours, high contrast, graded shading and asymmetric stylings to confound an algorithm’s assumptions about what a face looks like. To a human being, the result is still clearly a face. But a computer—or, at least, the specific algorithm Mr Harvey was aiming at—is baffled.
例如,2010年,美国研究者兼艺术家亚当•哈维(Adam Harvey)作为其纽约大学硕士论文的一部分,创造了“CV [computer vision] Dazzle(让计算机视觉眼花)”——一种旨在愚弄人脸识别器的化妆风格。它使用明亮的颜色、高对比度、渐变阴影和不对称的造型,来打乱算法对“人脸应该长什么样”的假设。在人类看来,化妆后的结果显然仍是一张脸;但计算机——至少是哈维先生所针对的那个特定算法——却被弄糊涂了。
词汇
Asymmetric/不对称的;非对称的
Dramatic make-up is likely to attract more attention from other people than it deflects from machines. HyperFace is a newer project of Mr Harvey’s. Where CV Dazzle aims to alter faces, HyperFace aims to hide them among dozens of fakes. It uses blocky, semi-abstract and comparatively innocent-looking patterns that are designed to appeal as strongly as possible to face classifiers. The idea is to disguise the real thing among a sea of false positives. Clothes with the pattern, which features lines and sets of dark spots vaguely reminiscent of mouths and pairs of eyes (see photograph), are already available.
不过,夸张的妆容从旁人那里吸引来的注意,很可能比它从机器那里转移掉的还要多。HyperFace是哈维先生较新的一个项目:CV Dazzle旨在改变人脸,而HyperFace旨在把真脸藏进几十张假脸之中。它使用块状的、半抽象的、看起来相对无害的图案,这些图案经过设计,要尽可能强烈地“吸引”人脸分类器。其思路是把真实目标伪装在一片误报(假阳性)的海洋里。印有这种图案的衣服已经上市,图案中的线条和成组的黑点隐约让人联想到嘴巴和一双双眼睛(见图)。
词汇
false positives/误报;假阳性
Vaguely/含糊地;暧昧地;茫然地
Reminiscent/怀旧的,回忆往事的

An even subtler idea was proposed by researchers at the Chinese University of Hong Kong, Indiana University Bloomington, and Alibaba, a big Chinese information-technology firm, in a paper published in 2018. It is a baseball cap fitted with tiny light-emitting diodes that project infra-red dots onto the wearer’s face. Many of the cameras used in face-recognition systems are sensitive to parts of the infra-red spectrum. Since human eyes are not, infra-red light is ideal for covert trickery.
香港中文大学、印第安纳大学布卢明顿分校和中国大型信息技术公司阿里巴巴的研究人员,在2018年发表的一篇论文中提出了一个更巧妙的想法:一顶装有微型发光二极管的棒球帽,可以把红外光点投射到佩戴者的脸上。人脸识别系统所用的许多摄像头对红外光谱的某些部分很敏感,而人眼对红外线并不敏感,因此红外光是暗中做手脚的理想选择。
词汇
Subtle/微妙的;精细的
Covert/隐蔽的,秘密的
In tests against FaceNet, a face-recognition system developed by Google, the researchers found that the right amount of infra-red illumination could reliably prevent a computer from recognising that it was looking at a face at all. More sophisticated attacks were possible, too. By searching for faces which were mathematically similar to that of one of their colleagues, and applying fine control to the diodes, the researchers persuaded FaceNet, on 70% of attempts, that the colleague in question was actually someone else entirely.
在针对谷歌开发的人脸识别系统FaceNet的测试中,研究人员发现,适量的红外照射能够可靠地让计算机根本识别不出自己看到的是一张脸。更复杂的攻击也是可能的:通过寻找在数学上与某位同事相似的面孔,并对二极管进行精细控制,研究人员在70%的尝试中让FaceNet相信,这位同事其实完全是另外一个人。
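FaceNet这类系统把人脸映射成一个嵌入向量(原论文为128维),“是否同一人”归结为向量距离是否低于阈值——这也解释了为何“数学上相似的脸”是突破口:只要红外光点把佩戴者的向量推进目标的匹配半径内,系统就会认错人。下面是一个示意性的 Python 片段,其中的向量和阈值都是占位假设,并非真实模型的输出。

```python
# 示意 FaceNet 式验证:身份判定就是嵌入向量间的距离检验。
# 这里的“嵌入”用随机单位向量代替,阈值也仅作演示。
import numpy as np

EMBEDDING_DIM = 128        # FaceNet 原论文使用的嵌入维度
MATCH_THRESHOLD = 1.1      # 演示用阈值,实际部署需另行调参

def fake_embedding(seed: int) -> np.ndarray:
    """代替真实模型输出:128维单位向量。"""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=EMBEDDING_DIM)
    return v / np.linalg.norm(v)

def same_identity(a: np.ndarray, b: np.ndarray) -> bool:
    """平方欧氏距离低于阈值即判定为同一人。"""
    return float(np.sum((a - b) ** 2)) < MATCH_THRESHOLD

attacker = fake_embedding(seed=1)
target = fake_embedding(seed=2)
print(same_identity(attacker, target))   # False:两个不同身份

# 红外攻击的抽象版本:一点点扰动攻击者的向量,直到落入目标的匹配半径。
step = 0.1 * (target - attacker)
spoofed = attacker + step
while not same_identity(spoofed, target):
    spoofed = spoofed + step
print(same_identity(spoofed, target))    # True:被当成了目标本人
```

现实攻击中的扰动发生在图像(红外光点)层面而不是向量层面,但最终要骗过的判定逻辑与上面相同。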
Training one algorithm to fool another is known as adversarial machine learning. It is a productive approach, creating images that are misleading to a computer’s vision while looking meaningless to a human being’s. One paper, published in 2016 by researchers from Carnegie Mellon University, in Pittsburgh, and the University of North Carolina, showed how innocuous-looking abstract patterns, printed on paper and stuck onto the frame of a pair of glasses, could often convince a computer-vision system that a male AI researcher was in fact Milla Jovovich, an American actress.
训练一种算法去欺骗另一种算法,被称为对抗性机器学习(adversarial machine learning)。这是一种卓有成效的方法:生成的图像会误导计算机的视觉,在人类看来却毫无意义。匹兹堡卡内基梅隆大学和北卡罗来纳大学的研究人员在2016年发表的一篇论文表明,把看似无害的抽象图案打印在纸上、贴在一副眼镜的镜框上,往往就能让计算机视觉系统相信:一位男性人工智能研究员“实际上”是美国女演员米拉•乔沃维奇(Milla Jovovich)。
词汇
Adversarial/对抗的
Innocuous/无害的;无伤大雅的
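“对抗性机器学习”的核心一步可以用下面这段 PyTorch 片段示意:利用目标模型自身的梯度,计算一个把图像推向指定(错误)类别的小扰动。上文的“眼镜攻击”是在此基础上加了可打印、限定在镜框区域等约束;这里的模型、图像和类别数都是演示用的替身,并非论文中的系统。

```python
# 对抗样本的最基本做法(针对目标类别的一步快速梯度法):
# 用模型自己的梯度,生成让图像“更像第7类身份”的微小扰动。
import torch
import torch.nn as nn

model = nn.Sequential(               # 替身:一个极简“人脸分类器”
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 10),      # 假设有10个身份类别
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # 原始输入图像
target_identity = torch.tensor([7])                     # 想被误认成的身份

loss = nn.functional.cross_entropy(model(image), target_identity)
loss.backward()                                          # 得到损失对像素的梯度

epsilon = 0.03                                           # 扰动幅度(演示值)
adversarial = (image - epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print(model(image).argmax(dim=1).item())        # 原始预测
print(model(adversarial).argmax(dim=1).item())  # 预测被推向第7类(加大epsilon或多步迭代会更可靠)
```

论文中的攻击还要保证图案打印出来、再被摄像头拍到之后仍然有效,这正是“物理对抗样本”比纯数字攻击更难之处。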
In a similar paper, presented at a computer-vision conference in July, a group of researchers at the Catholic University of Leuven, in Belgium, fooled person-recognition systems rather than face-recognition ones. They described an algorithmically generated pattern that was 40cm square. In tests, merely holding up a piece of cardboard with this pattern on it was enough to make an individual—who would be eminently visible to a human security guard—vanish from the sight of a computerised watchman.
在7月一个计算机视觉会议上发表的一篇类似论文中,比利时鲁汶天主教大学的一组研究人员愚弄的是行人识别系统,而不是人脸识别系统。他们描述了一种由算法生成、40厘米见方的图案。在测试中,只要举起一块印有这种图案的纸板,就足以让一个人——在人类保安看来再明显不过——从计算机“看守”的视野中消失。
词汇
Eminently/突出地;显著地
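这种“隐身纸板”的训练思路大致是:把图案的像素当作可学习参数,把它贴到训练图像上,再对检测器输出的“有人”置信度做梯度下降。下面的 PyTorch 片段只是该思路的示意——检测器用一个小网络替代,并非论文所用的真实模型,训练图像也用随机张量代替,且省略了可打印性等约束。

```python
# 示意“让人消失”的对抗图案如何训练:
# 以图案像素为参数,最小化检测器给出的“有人”置信度。
import torch
import torch.nn as nn

detector = nn.Sequential(            # 替身检测器:输出一个“有人”置信度
    nn.Conv2d(3, 8, kernel_size=5, stride=4),
    nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(1),
    nn.Sigmoid(),
)

patch = torch.rand(3, 40, 40, requires_grad=True)   # 待优化的可打印图案
optimizer = torch.optim.Adam([patch], lr=0.05)

for _ in range(200):
    image = torch.rand(1, 3, 120, 120)               # 代替一张训练照片
    image[:, :, 40:80, 40:80] = patch                # 相当于把纸板举在身前
    confidence = detector(image)                     # 检测器认为“有人”的程度
    loss = confidence.mean()                         # 最小化它,人就“消失”
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0.0, 1.0)                      # 保持像素值在可打印范围内
```

要让同一块图案对不同姿态、光照下的真实行人都起作用,而且它主要只对训练时所针对的那个检测器有效——这也呼应了下文“大多只对特定识别算法有效”的局限。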
As the researchers themselves admit, all these systems have constraints. In particular, most work only against specific recognition algorithms, limiting their deployability. Happily, says Mr Harvey, although face recognition is spreading, it is not yet ubiquitous—or perfect. A study by researchers at the University of Essex, published in July, found that although one police trial in London flagged up 42 potential matches, only eight proved accurate. Even in China, says Mr Harvey, only a fraction of CCTV cameras collect pictures sharp enough for face recognition to work. Low-tech approaches can help, too. “Even small things like wearing turtlenecks, wearing sunglasses, looking at your phone [and therefore not at the cameras]—together these have some protective effect”.
正如研究人员自己所承认的,所有这些系统都有局限。尤其是,它们大多只对特定的识别算法有效,这限制了可部署性。哈维先生说,值得庆幸的是,尽管人脸识别正在扩散,但它还没有无处不在,也还不完美。埃塞克斯大学研究人员7月发表的一项研究发现,伦敦警方的一次试验虽然标记出42个潜在匹配,但只有8个被证明准确。哈维表示,即便在中国,也只有一小部分闭路电视摄像头拍到的画面足够清晰,能让人脸识别发挥作用。低技术含量的办法同样有帮助:“即使是穿高领毛衣、戴太阳镜、低头看手机(因而不看摄像头)这样的小事,加在一起也有一定的保护作用。”
词汇
Deployability/可部署性
flag up/指出
Turtlenecks/圆翻领;高翻领毛衣