Friday, 28 January 2022

VB Daily | January 28 - Data Privacy Day 2022: Google celebrates with new tools 🛠️

Daily Roundup
The Lead
[1] Google releases differential privacy tools to commemorate Data Privacy Day 
[2] AI can be used to cheat on programming tests
[3] Metaverse vs. data privacy: A clash of the titans
The Follow
[1] In an effort to make differential privacy tools accessible to more people, Google today announced that it's expanding its differential privacy library to the Python programming language in partnership with OpenMined, an open source community focused on privacy-preserving technologies. The company also released a new differential privacy tool that it claims allows practitioners to visualize and better tune the parameters used to produce differentially private information, as well as a paper sharing techniques for scaling differential privacy to large datasets.
Google's announcement marks a year since it began collaborating with OpenMined and coincides with Data Privacy Day, which commemorates the January 28, 1981 signing of Convention 108, the first legally binding international treaty dealing with data protection. Google open-sourced its differential privacy library — which the company claims is used in core products like Google Maps — in September 2019, before the arrival of Google's experimental module that tests the privacy of AI models. >> Read more.
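At its core, differential privacy works by adding calibrated random noise to aggregate statistics so that no single person's record can be inferred from the output. The Python sketch below illustrates that underlying idea with the classic Laplace mechanism; it is not Google's library API, and the query, epsilon value, and data are illustrative assumptions only.

import numpy as np

def dp_count(values, threshold, epsilon=1.0):
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true count by at most 1, so Laplace noise with scale
    # sensitivity / epsilon gives epsilon-differential privacy.
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative usage: a smaller epsilon means more noise and stronger privacy.
ages = [34, 29, 41, 52, 38, 61, 27]
print(dp_count(ages, threshold=40, epsilon=0.5))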
[2] Plagiarism isn't limited to words. Programming plagiarism — where a developer deliberately copies code without attribution — is a growing trend. According to a New York Times article, more than half of the 49 allegations of academic code violations at Brown University in 2016 involved cheating in computer science. At Stanford, as many as 20% of the students in a 2015 computer science course were flagged for possible cheating, the article reports.
A new study finds that freely available AI systems could be used to complete introductory-level programming assignments without being flagged by MOSS (Measure of Software Similarity), a widely used plagiarism detection tool. In a paper coauthored by researchers at Booz Allen Hamilton and EleutherAI, a language model called GPT-J was used to generate code "lacking any particular tells that future plagiarism detection techniques may use to try to identify algorithmically generated code."
"The main goal of the paper was to contextualize the fact that GPT-J can solve introductory computer science exercises in a realistic threat model for plagiarism in an education setting," Stella Biderman, an AI researcher at Booz Allen Hamilton and coauthor of the study, told VentureBeat. >> Read more.
[3] It may well be another "clash of the titans" when the metaverse – as we understand it now – meets data privacy. The metaverse wants to harvest new, uncharted personal information, even to the point of noting and analyzing where your eyes go on a screen and how long you gaze at certain products. Data privacy, on the other hand, wants to protect consumers from this incessant cherry-picking of their data.
It's too early to know what specific protections the metaverse will require as usage evolves, but the reality is we're not starting from the most solid foundation. In many jurisdictions, consumers don't yet have the protections they need for today, let alone for the metaverse and the myriad new ways their data may be used (and abused) tomorrow.
More data means advertisers have a substantially richer cupboard to mine for far deeper targeting, often using the same platforms that are speaking most loudly about the metaverse's potential. >> Read more.
Study: B2B Events Get a Long-Needed Digital Makeover

 

The Buzz
J. Nathan Matias
Algorithm harm bounties attempt to solve social problems by converting them into a market.

This report reviews the history, victories, and risks of bounty systems, while also proposing ideas to manage the significant problems created by making harm-reduction into a market https://t.co/jSL5BeXvxG
Ada Lovelace Institute
'Exploring legal mechanisms for data stewardship' - a joint publication with the UK AI Council - explored three legal mechanisms that could help facilitate responsible data stewardship:
1⃣Data trusts
2⃣Data cooperatives
3⃣Corporate and contractual models

https://t.co/Lhm716Etka
Sources Say
A new report reveals that as teams rush to expand, container security and usage best practices are sacrificed, leaving openings for attackers. In addition, operational controls lag, potentially resulting in hundreds of thousands of dollars wasted on poor capacity planning. All of these are indicators that cloud and container adoption is maturing beyond early, "expert" adopters, but moving quickly with an inexperienced team can increase risk and cost.
One of the most striking findings is that 85% of images that run in production contain at least one patchable vulnerability, and 75% contain patchable vulnerabilities of "high" or "critical" severity. Organizations do take calculated risks to move more quickly, but this implies a fairly significant level of risk acceptance, which is not unusual for high-agility operating models yet can be very dangerous. >> Here's what this may mean for your company.
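The report doesn't prescribe specific tooling, but the practical takeaway is to scan images for patchable vulnerabilities before they reach production. The sketch below is one illustrative approach, assuming the open-source Trivy scanner is installed and that its JSON report keeps the Results/Vulnerabilities layout used here; it is not drawn from the report itself.

import json
import subprocess
import sys

def patchable_high_or_critical(image):
    # Run Trivy against the image and collect findings that are both
    # severe (HIGH/CRITICAL) and fixable (a patched version exists).
    raw = subprocess.run(
        ["trivy", "image", "--format", "json", image],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(raw)
    findings = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in ("HIGH", "CRITICAL") and vuln.get("FixedVersion"):
                findings.append(vuln.get("VulnerabilityID"))
    return findings

if __name__ == "__main__":
    issues = patchable_high_or_critical(sys.argv[1])
    if issues:
        print(f"{len(issues)} patchable high/critical vulnerabilities found")
        sys.exit(1)  # fail the CI step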
Did someone share VB Daily with you because they knew you'd love it? Sign up to get top data, AI, and tech news delivered to your inbox every weekday >>