Michael Latzer @ Forum Zukunft Bildung 2018
New Team Members - Michael Reiss and Tanja Rüedy
New Video on Media Change in Switzerland
Algorithmic decision making and online platforms - Michael Latzer @ European Commission workshop
Focusing on Digital Inequality Outcomes - Moritz Büchi @ workshop
Public Workshop: Artificial Intelligence in Our Everyday Lives
Verkaufte Datenseele ("Sold Data Soul") - Interview with Michael Latzer on Algorithms
Digital Well-Being – Moritz Büchi starting project as Digital Society Initiative Fellow
Perceived Surveillance Leads to Self-Censorship – WIP-CH 2019 Reports Published
In a world where our data allows us to be consistently identified over time, bans on facial recognition aren't enough, writes Bruce Schneier.
“Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point,” says Schneier. “We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.”
Faculty associate Woodrow Hartzog spoke to The New York Times about the harrowing consequences of facial recognition.
“We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” Hartzog said. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”
The deadly December shooting of three U.S. sailors at a Navy installation could reignite a long-simmering fight between the federal government and tech companies over data privacy and encryption.
“They’re just public shaming and asking nicely,” said Bruce Schneier. “Hurting everybody’s security for some forensic evidence is a dumb tradeoff.”
One of the trends that came into sharp focus in 2019 was, ironically, a woeful lack of clarity around AI ethics. The AI field at large was paying attention to ethics, creating and applying frameworks for AI research, development, policy, and law, but there was no unified approach. A team of researchers from BKC recently released a white paper and visualization that mapped AI principles and guidelines to find consensus.
Jessica Fjeld, lead author of the recent BKC report Principled Artificial Intelligence, warns that giving too much credence to Big Tech is like “asking the fox for guidance on henhouse security procedures.”
The difference between the protections YouTube offers its advertisers and those it provides consumers is stark.
Jonas Kaiser notes that YouTube faces questions of censorship and freedom of speech when it comes to what videos are permitted on the platform. “The relationship YouTube has with advertisers is more straightforward,” he says, adding that YouTube protects itself from suffering financially by working to remove ads from harmful content.
A state lawmaker in Utah wants police to stop using consumer genealogy databases to help them find criminals.
Jasmine McNealy, faculty associate, said that law enforcement accessing personal data held by third parties is not a new legal debate. “We’ve seen this problem with banking and cell phone data for a long time,” she said. “But with DNA we immediately see the implications. It needs a higher privacy standard.”
The “smart city,” presented as an ideal: efficient and effective at delivering services, has captured the imaginations of policymakers, scholars, and urban dwellers. But what are the possible drawbacks of living in an environment that is constantly collecting data?
Ben Green joins Jasmine McNealy to discuss his book The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future.
Ethan Zuckerman contributed to a series of essays from the Knight First Amendment Institute called “The Tech Giants, Monopoly Power, and Public Discourse.”
“At these moments of technological shift, it’s easy to assume that the business models adopted by technological innovators are inevitable and singular. They are not.”
Mutale Nkonde joined Slate's technology podcast What Next: TBD to discuss Alphabet and inherent bias.
“You can effectively use Google products in every single area of your life, and the underlying algorithms are going to have problems of bias not because Google is a terrible company or the computer scientists are racist, it’s just the fact that they are using societal data and our data has inherent biases.”