In short
- Google’s SynthID embeds traceable markers in content from Google’s AI tools.
- The tool flags AI-generated content with the help of invisible watermarks embedded in media.
- It also helps identify AI-made text and video as worries about cheating grow.
With deepfakes, misinformation, and AI-assisted cheating spreading online and in classrooms, Google DeepMind unveiled the SynthID Detector on Tuesday. The new tool scans images, audio, video, and text for invisible watermarks embedded by Google’s growing suite of AI models.
Designed to work across multiple media types in one place, SynthID Detector aims to improve transparency by identifying content made by Google’s AI, including the audio tool NotebookLM, the music model Lyria, and the image generator Imagen, and by highlighting the portions most likely to be watermarked.
“For text, SynthID looks at which words will be generated next and adjusts the probability of suitable word choices in ways that don’t affect the overall quality and utility of the text,” Google said in a demo presentation.
“If a passage contains more instances of preferred word choices, SynthID will detect that it is watermarked,” it added.
SynthID adjusts the probability scores of word choices during text generation, embedding an invisible watermark that has no effect on the meaning or readability of the output. That watermark can later be used to identify content produced by Google’s Gemini app or web tools.
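SynthID’s exact algorithm is not public, but the mechanism Google describes, nudging word probabilities during generation and later detecting the statistical skew, can be sketched in miniature. The toy below (all names, the vocabulary, and the bias value are illustrative assumptions, not Google’s implementation) partitions a vocabulary into “preferred” words seeded by the previous word, boosts their sampling weight during generation, and then measures how often a text lands on preferred words:

```python
import hashlib
import math
import random

# Toy sketch of logit-bias watermarking; SynthID's real method is not public.
VOCAB = [f"word{i}" for i in range(500)]
BIAS = 2.0  # how strongly "preferred" words are boosted during generation

def preferred_words(prev_word: str) -> set:
    """Deterministically pick half the vocabulary as 'preferred',
    seeded by a hash of the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest()[:8], 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(n_words: int, watermark: bool, seed: int = 0) -> list:
    """Sample a word sequence; when watermarking, boost preferred words."""
    rng = random.Random(seed)
    text = [rng.choice(VOCAB)]
    for _ in range(n_words - 1):
        good = preferred_words(text[-1])
        # Uniform base logits; add BIAS to preferred words if watermarking.
        weights = [math.exp(BIAS if (watermark and w in good) else 0.0)
                   for w in VOCAB]
        text.append(rng.choices(VOCAB, weights=weights)[0])
    return text

def preferred_fraction(text: list) -> float:
    """Detector: share of words in their predecessor's preferred set.
    Near 0.5 for unmarked text, noticeably higher when watermarked."""
    hits = sum(1 for prev, cur in zip(text, text[1:])
               if cur in preferred_words(prev))
    return hits / (len(text) - 1)

marked = generate(300, watermark=True)
plain = generate(300, watermark=False)
print(f"watermarked: {preferred_fraction(marked):.2f}")  # well above 0.5
print(f"plain:       {preferred_fraction(plain):.2f}")   # near 0.5
```

Because the bias only reorders choices among plausible words, the output stays readable, which matches Google’s claim that the watermark does not affect text quality; detection is purely statistical, so it gets more reliable as passages get longer.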
Google first introduced SynthID watermarking in August 2023 as a tool to detect AI-generated images. With the launch of SynthID Detector, it has expanded that functionality to audio, video, and text.
SynthID Detector is currently available in a limited release, with a waitlist for journalists, educators, designers, and researchers who want to try the program.
As generative AI tools become widespread, educators are finding it increasingly difficult to determine whether a student’s work is original, even in assignments intended to reflect personal experiences.
Using AI to cheat
A recent report by New York Magazine highlighted the growing problem.
A professor of technology ethics at Santa Clara University assigned a personal reflection essay, only to discover that a student had used ChatGPT to complete it.
At the University of Arkansas at Little Rock, another professor caught students relying on AI to write their course introduction essays and class goals.
Despite a surge in students using its AI model to cheat in the classroom, OpenAI shut down its AI detection software in 2023, citing low accuracy.
“We recognize that identifying AI-written text has been an important point of discussion among educators, and equally important is recognizing the limits and impacts of AI-generated text classifiers,” OpenAI said at the time.
Aggravating the problem of AI cheating are new tools such as Cluely, an application designed to bypass AI detection software. Developed by former Columbia University student Roy Lee, Cluely circumvents AI detection at the desktop level.
Promoted as a way to cheat on exams and interviews, the application helped Lee raise $5.3 million to build it.
“It blew up after I posted a video of myself using it during an Amazon interview,” Lee previously told Decrypt. “As I used it, I realized the user experience was really interesting. Nobody had explored this idea of a translucent screen overlay that sees your screen, hears your audio, and acts like a player two for your computer.”
Despite the promise of tools like SynthID, many current AI detection methods remain unreliable.
In October, a Decrypt test of four leading AI detectors, Grammarly, QuillBot, GPTZero, and ZeroGPT, found that only two could correctly determine whether humans or AI wrote the U.S. Declaration of Independence.
Edited by Sebastian Sinclair