News
Previous adversarial examples have largely been designed in “white box” settings, where computer scientists have access to the underlying mechanics that power an algorithm. In these scenarios ...
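To make the "white box" distinction concrete, here is a minimal sketch of one standard white-box attack, the fast gradient sign method, which relies on direct access to a model's gradients. It assumes a PyTorch classifier; the function name fgsm_perturb and the epsilon budget are illustrative, and this is not necessarily the technique used in the research described here.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Illustrative white-box attack: uses the model's own gradients (the
    'underlying mechanics' available in a white-box setting) to nudge an
    input toward misclassification via the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    return (x + epsilon * x.grad.sign()).detach()
```

In a black-box setting, by contrast, an attacker cannot call backward() on the target model and must probe it from the outside.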
For example, algorithms used in facial recognition technology have in the past shown higher identification rates for men than for women, and for individuals of non-white origin than for whites.
I love both of these examples, because I love the idea that we can take our own democratic action to make the world a bit less complicated. Alas, it is not that simple.
A study published Thursday in Science found that a health care risk-prediction algorithm, a major example of the kind of tool applied to more than 200 million people in the U.S., demonstrated racial bias ...