Other News from Participating Centers

Michael Latzer @ Forum Zukunft Bildung 2018

New Team Members - Michael Reiss and Tanja Rüedy

New Video on Media Change in Switzerland

Open Positions @ MC&I

Algorithmic decision making and online platforms - Michael Latzer @ European Commission workshop

Focusing on Digital Inequality Outcomes - Moritz Büchi @ workshop

Public Workshop: Artificial Intelligence in Our Everyday Lives

The Sold Data Soul - Interview with Michael Latzer about Algorithms

Digital Well-Being – Moritz Büchi starting project as Digital Society Initiative Fellow

Perceived Surveillance Leads to Self-Censorship – WIP-CH 2019 Reports Published

We’re Banning Facial Recognition. We’re Missing the Point.

In a world where our data allows us to be consistently identified over time, bans on facial recognition aren't enough, writes Bruce Schneier.

“Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point,” says Schneier. “We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.”

The Secretive Company That Might End Privacy as We Know It

Faculty associate Woodrow Hartzog spoke to The New York Times about the harrowing consequences of facial recognition.

“We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” Hartzog said. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

The Justice Department’s new quarrel with Apple

The deadly December shooting of three U.S. sailors at a Navy installation could reignite a long-simmering fight between the federal government and tech companies over data privacy and encryption.

“They’re just public shaming and asking nicely,” said Bruce Schneier. “Hurting everybody’s security for some forensic evidence is a dumb tradeoff.”

Read more from The Washington Post

Meta-analysis shows AI ethics principles emphasize human rights

One of the trends that came into sharp focus in 2019 was, ironically, a woeful lack of clarity around AI ethics. The AI field at large was paying attention to ethics, creating and applying frameworks for AI research, development, policy, and law, but there was no unified approach. A team of researchers from BKC recently released a white paper and visualization that mapped AI principles and guidelines to find consensus.

Read more from Venture Beat

Google and Microsoft shouldn’t decide how technology is regulated

Jessica Fjeld, lead author of the recent BKC report Principled Artificial Intelligence, warns that giving too much credence to Big Tech is like “asking the fox for guidance on henhouse security procedures.”

How YouTube shields advertisers (not viewers) from harmful videos

The difference between the protections YouTube offers its advertisers and those it provides consumers is stark. 

Jonas Kaiser notes that YouTube faces questions of censorship and freedom of speech when it comes to what videos are permitted on the platform. “The relationship YouTube has with advertisers is more straightforward,” he says, adding that YouTube protects itself from suffering financially by working to remove ads from harmful content. 

One State May Become the First to Ban Law Enforcement Use of Genealogy Databases

A state lawmaker in Utah wants police to stop using consumer genealogy databases to help them find criminals.

Jasmine McNealy, faculty associate, said that law enforcement accessing personal data held by third parties is not a new legal debate. “We’ve seen this problem with banking and cell phone data for a long time,” she said. “But with DNA we immediately see the implications. It needs a higher privacy standard.”

Read more from Route Fifty

The Smart Enough City

The “smart city,” presented as an ideal of efficient and effective service delivery, has captured the imaginations of policymakers, scholars, and urban dwellers. But what are the possible drawbacks of living in an environment that is constantly collecting data?

Ben Green joins Jasmine McNealy to discuss his book The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future.

Digital Public Infrastructure… and a few words in defense of optimism

Ethan Zuckerman contributed to a series of essays from the Knight First Amendment Institute called “The Tech Giants, Monopoly Power, and Public Discourse.”

“At these moments of technological shift, it’s easy to assume that the business models adopted by technological innovators are inevitable and singular. They are not.”

Which Tech Companies Are Doing the Most Harm?

Mutale Nkonde joined Slate's technology podcast What Next: TBD to discuss Alphabet and inherent bias.

“You can effectively use Google products in every single area of your life, and the underlying algorithms are going to have problems of bias not because Google is a terrible company or the computer scientists are racist, it’s just the fact that they are using societal data and our data has inherent biases.”

Listen to the podcast from Slate