Here I am sharing the corrected content of a May 2023 email sent to the White House in response to its open inquiry about how to safeguard media (image, audio, and text) from deepfakes.
Subject:
In an era in which convincing images, audio, and text can be generated with ease on a massive scale, how can we ensure reliable access to verifiable, trustworthy information? How can we be certain that a particular piece of media is genuinely from the claimed source?
Here are some ideas and concepts I'd like to offer for working on the matter.
Firstly, it is possible to ask, or legally compel, the firms offering AI that generates text, images, or audio to "watermark" the created content.
This could be done simply (or, in an emergency, quickly) by appending an invisible prompt to the user's prompt; this is the easiest technical route. Concretely, for images, this means adding an invisible instruction such as [user prompt] + [and generate this image in such a manner that anyone can tell it has been made by AI], or [user prompt] + [and add a clear text banner that lets anyone know it has been created by AI].
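To make the mechanism concrete, here is a minimal sketch of that server-side augmentation. The suffix wording and the augmented_prompt() helper are illustrative placeholders, not any provider's actual pipeline; only the append-before-generation step is the point.

```python
# Minimal sketch of server-side prompt augmentation for AI disclosure.
# DISCLOSURE_SUFFIX wording and the function name are hypothetical
# placeholders; only the append-before-generation mechanism matters.

DISCLOSURE_SUFFIX = (
    " and add a clear text banner that lets anyone know "
    "this image has been created by AI"
)

def augmented_prompt(user_prompt: str) -> str:
    """Return the user's prompt with the mandated disclosure appended."""
    return user_prompt + DISCLOSURE_SUFFIX

if __name__ == "__main__":
    # The provider would pass this augmented prompt to its image model.
    print(augmented_prompt("a photorealistic portrait of a public figure"))
```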
The same goes for text and audio, each in its own adapted form: for example, inserting ASCII marker characters between sentences or paragraphs of generated text, or overlaying a looped audio gimmick (continuous or not) on a generated track. Another approach would be to make this watermark more discreet, for example by legally requiring these firms to hide steganographic signatures showing that the content, whatever it is, has been generated by AI, such as a hidden pixel layer or a hidden audio sequence.
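One simple way to realize the "hidden pixel layer" idea is least-significant-bit steganography. The sketch below, using the Pillow imaging library, hides a short ASCII signature ("AI-GEN", an illustrative marker, not a real standard) in the low-order bits of an image's red channel and reads it back:

```python
# Sketch of a hidden pixel-layer signature via LSB steganography.
# SIGNATURE and the bit layout are illustrative, not a standard.
from PIL import Image

SIGNATURE = "AI-GEN"

def embed_signature(img: Image.Image) -> Image.Image:
    """Hide SIGNATURE in the least significant bits of the red channel."""
    out = img.convert("RGB")
    px = out.load()
    w, _ = out.size
    bits = [int(b) for ch in SIGNATURE.encode("ascii") for b in f"{ch:08b}"]
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)  # overwrite the lowest red bit
    return out  # save as PNG: lossy formats like JPEG destroy the mark

def read_signature(img: Image.Image, n_chars: int = len(SIGNATURE)) -> str:
    """Recover the hidden signature from the red-channel low bits."""
    px = img.convert("RGB").load()
    w, _ = img.size
    bits = "".join(str(px[i % w, i // w][0] & 1) for i in range(n_chars * 8))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

if __name__ == "__main__":
    marked = embed_signature(Image.new("RGB", (64, 64), "white"))
    print(read_signature(marked))  # prints: AI-GEN
```

Note that such low-order-bit marks survive only lossless handling; a production scheme would need a watermark robust to compression and re-encoding.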
That covers AI-generated content; now let us turn to human-generated content, the other part that has to be protected and secured. This concerns essentially news channels, whether WebTV or official TV channels.
What we must admit is that, as in cybersecurity, we can raise the security level but cannot guarantee 100% protection against deepfakes. Therefore, the techniques public actors use to counter deepfakes must be dynamic, meaning they have to evolve and react as new threats appear and emerge. Here are two ideas to provide a first robust layer of security:
Legally require TV channels to keep the date and time visible at all times on their stream, whether on a default banner, a dedicated banner, or in the corners. Why? Because it allows anyone who sees an out-of-context clip of a show to trace the date and time of broadcast and eventually find the original sequence.
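As a rough illustration of such a banner, here is a sketch that burns the current UTC timestamp into a corner of a single frame using the Pillow library; a real broadcaster would of course render this in the production chain, not frame by frame in Python:

```python
# Sketch of the always-visible date/time banner: stamp the current
# UTC time into a frame's bottom-left corner. Pillow is assumed.
from datetime import datetime, timezone
from PIL import Image, ImageDraw

def stamp_frame(frame: Image.Image) -> Image.Image:
    """Burn the current UTC date and time into the frame."""
    out = frame.convert("RGB")
    draw = ImageDraw.Draw(out)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
    draw.text((10, out.height - 20), stamp, fill="white")
    return out

if __name__ == "__main__":
    frame = Image.new("RGB", (1280, 720), "black")
    stamp_frame(frame).save("stamped_frame.png")
```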
Another means of action against deepfakes targeting TV channels would be to keep a QR code visible on the live broadcast (in a corner, for example), linking to the official website where the show (or sequences of it) can be viewed again. With this, even faked sequences would be nullified, since anyone could check a suspicious clip against the official replay.
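A sketch of that overlay, assuming the open-source qrcode and Pillow packages and a placeholder replay URL:

```python
# Sketch of the on-screen QR code idea: generate a code pointing to a
# hypothetical official replay page and paste it into the frame's
# bottom-right corner. Assumes the qrcode and Pillow packages.
import io

import qrcode
from PIL import Image

REPLAY_URL = "https://example-broadcaster.tv/replay"  # hypothetical URL

def overlay_qr(frame: Image.Image, url: str = REPLAY_URL) -> Image.Image:
    """Paste a QR code linking to the official replay onto the frame."""
    buf = io.BytesIO()
    qrcode.make(url).save(buf)  # render the QR code as PNG bytes
    buf.seek(0)
    qr_img = Image.open(buf).resize((120, 120))
    out = frame.convert("RGB")
    out.paste(qr_img, (out.width - 130, out.height - 130))
    return out

if __name__ == "__main__":
    overlay_qr(Image.new("RGB", (1280, 720), "black")).save("qr_frame.png")
```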
Keep in mind that these techniques can be effective only if they are deployed on most, if not all, channels (a mass effect).
[ Salutations ]
Quote from an August 2024 article: "In just 6 months, these 8 engineers created a made-in-France AI better than ChatGPT, which everyone will soon be able to access."
"Kyutai is actively working on strengthening security and on invisible watermarking of the content generated by Moshi."
#Key:Risk