[1] In an effort to make differential privacy tools accessible to more people, Google today announced that it is expanding its differential privacy library to the Python programming language in partnership with OpenMined, an open source community focused on privacy-preserving technologies. The company also released a new differential privacy tool that it claims lets practitioners visualize and better tune the parameters used to produce differentially private information, as well as a paper sharing techniques for scaling differential privacy to large datasets.
Google's announcement marks a year since it began collaborating with OpenMined, and it coincides with Data Privacy Day, which commemorates the January 1981 signing of Convention 108, the first legally binding international treaty on data protection. Google open-sourced its
differential privacy library — which the company claims is used in core products like Google Maps — in September 2019, before the arrival of Google's
experimental module that tests the privacy of AI models.
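The core idea behind producing "differentially private information" is adding carefully calibrated noise to query results. As an illustrative sketch in plain Python (not Google's or OpenMined's actual API), here is the classic Laplace mechanism applied to a counting query:

```python
import math
import random

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    person changes the answer by at most 1), so Laplace noise with
    scale 1/epsilon suffices. Smaller epsilon means stronger
    privacy but noisier answers.
    """
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sample from Laplace(0, 1/epsilon)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Tuning epsilon is exactly the kind of trade-off the new visualization tool is meant to help with: the noisy count is unbiased on average, but any single release can deviate substantially when epsilon is small.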
>> Read more. [2] Plagiarism isn't limited to words.
Programming plagiarism — where a developer copies code deliberately without attribution — is an increasing trend. According to a New York Times article, at Brown University, more than half of the 49 allegations of academic code violations in 2016 involved cheating in computer science. At Stanford, as many as 20% of the students in a 2015 computer science course were flagged for possible cheating, the article reports.
A new
study finds that freely available
AI systems could be used to complete
introductory-level programming assignments without triggering MOSS (Measure of Software Similarity), a widely used code-plagiarism detector. In a paper coauthored by researchers at Booz Allen Hamilton and
EleutherAI, a language model called GPT-J was used to generate code "lacking any particular tells that future plagiarism detection techniques may use to try to identify algorithmically generated code."
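For context on what such generated code is evading: MOSS's published matching technique is winnowing, which fingerprints overlapping k-grams of normalized source text and compares the fingerprint sets. A minimal sketch, assuming a crude whitespace-stripping normalization (not MOSS's actual implementation):

```python
def fingerprints(code: str, k: int = 5, window: int = 4) -> set:
    """Winnowing: hash every k-gram, keep each sliding window's minimum hash.

    Python's hash() is stable within one process, which is all a
    pairwise comparison needs.
    """
    text = "".join(code.split()).lower()  # crude normalization
    grams = [hash(text[i:i + k]) for i in range(len(text) - k + 1)]
    return {min(grams[i:i + window]) for i in range(len(grams) - window + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two fingerprint sets."""
    fa, fb = fingerprints(a), fingerprints(b)
    return len(fa & fb) / max(1, len(fa | fb))
```

Because fingerprints survive variable renaming only partially, code that is restructured (or generated fresh by a language model) shares few k-grams with the original and scores low, which is what the study exploited.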
[3] It may well be another "clash of the titans" when the metaverse – such as we understand it now – meets
data privacy. The metaverse wants to harvest new, uncharted personal information, even to the point of noting and analyzing where your eyes go on a screen and how long you gaze at certain products. Data privacy, on the other hand, wants to
protect consumers from this incessant cherry-picking.
It's too early to know what specific protections the metaverse will require as usage evolves, but the reality is we're not starting from the most solid foundation. In many jurisdictions, consumers don't yet have the protections they need for today, let alone for the metaverse and the myriad new ways their
data may be used (and abused) tomorrow.
More data means
advertisers have a substantially richer cupboard to mine for far deeper targeting, often using the same platforms that are speaking most loudly about the metaverse's potential.
>> Read more.