
AI vs AI: Deepfake and GenAI Defense System for Combating the Synthetic Media Threat

Grant · Not funded · $0 raised

Project summary

In an era where AI is reshaping our digital landscape at an unprecedented pace, we face a critical threat to the foundations of digital security and trust. The emergence of AI-generated content, particularly deepfakes and manipulated media, has opened a Pandora's box of misuse and deception. While venture capital floods into generative AI, the equally crucial field of defending against synthetic media remains woefully underfunded and overlooked.

The urgency of addressing this challenge cannot be overstated. Recent incidents paint an alarming picture:

  1. Corporate Fraud: In 2019, a UK-based energy company fell victim to a sophisticated attack in which AI was used to mimic its CEO's voice, resulting in a fraudulent transfer of $243,000.

  2. Financial Scams: Deepfake videos of high-profile figures like Elon Musk have been used to promote fraudulent cryptocurrency schemes, leading to significant financial losses for investors worldwide.

  3. Political Manipulation: A deepfake video circulated showing a French diplomat making false, inflammatory statements, leading to international tensions that required substantial diplomatic efforts to resolve.

  4. Misinformation Campaigns: Former U.S. President Donald Trump reposted AI-generated deepfakes and manipulated images of Taylor Swift to Truth Social, depicting the pop star and "supporters" endorsing him in the upcoming election. This incident showcases how deepfakes can be weaponized for political gain and voter manipulation.

  5. Social Media Deception: We're witnessing a proliferation of deepfake-based influencer accounts on platforms like Instagram. For instance, an account with the handle @aayushiiisharma__ has amassed around 100,000 followers with hyper-realistic AI-generated content. Such accounts could potentially be used to scam followers or deceive brands hoping to advertise with real influencers.

  6. Personal Security Threats: Voice-based scams are on the rise, in which criminals use AI to emulate the voices of loved ones, often to request emergency financial aid. A recent case in India saw a woman lose Rs 1.4 lakh (approximately $1,700) to such a scam.

The scale of this threat is set to escalate dramatically. Gartner Research predicts that by 2025, a staggering 30% of deepfakes will be indistinguishable from authentic media without specialized detection tools. This forecast underscores the pressing need for advanced detection technologies to safeguard our digital interactions.

What are this project's goals? How will you achieve them?

We're a team of researchers and engineers from IIT Delhi and IIT Roorkee who are developing a state-of-the-art deepfake detection tool for mobile and web, aiming to identify 98-99% of modified media.

Key Features:

  1. Multi-Platform: Available on both mobile and PC.

  2. High Accuracy: Aiming for a 98-99% detection rate, significantly higher than current industry standards. This is ambitious, but we believe it is feasible.

  3. Real-Time: Detection runs in the background while you scroll or while you talk (see the inference sketch after this list).

  4. Continuous Learning: Machine learning pipelines that adapt to new deepfake techniques as they emerge.

  5. Privacy-Focused: Strict adherence to data protection regulations, ensuring user privacy and data security.

  6. That's me; I am an honorary veteran of the Air Force.

Well, not really!
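To make the detection flow concrete, here is a minimal sketch of how a single frame could be scored. The ResNet-18 backbone, the weights file detector.pt, the input filename, and the 0 = real / 1 = fake label convention are all illustrative assumptions, not our production model:

```python
# Minimal single-frame scoring sketch; the backbone, weights file, and
# label convention are illustrative assumptions, not the shipped model.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(num_classes=2)  # binary head: real vs. fake
model.load_state_dict(torch.load("detector.pt", map_location="cpu"))
model.eval()

image = Image.open("suspect_frame.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

print(f"P(fake) = {probs[0, 1].item():.3f}")  # index 1 = fake class
```

In the real-time path, the same scoring step runs per sampled frame of a video or call rather than per uploaded file.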

We will offer one year of free access to the tool to every contributor who gives $10 or more on Manifund. Invites will be sent to the same email address used to make the Manifund contribution.

How will this funding be used?

  • 50% - Research team stipends (currently paid out of our own pockets)

  • 30% - Cloud compute costs (we have only minimal credits to survive on)

  • 10% - Privacy certifications (US and EU)

  • 10% - Operational expenses (coworking space, travel)

Who is on your team? What's your track record on similar projects?

Utsav Singhal (LinkedIn)

Sarthak Gupta (LinkedIn)

4 research interns from IIT Delhi and IIT Roorkee

What are the most likely causes and outcomes if this project fails?

No part of the project would count as a failure, because every step is incremental. If we cannot deliver the promised accuracy by the end of March 2025, we will open-source the tool.

What other funding are you or your project getting?

$7,715 grant from Entrepreneurship First (an incubator). We have no other funding or promised commitments.

Feel free to ask any questions or request clarifications. We are based at Urban Vault 65, HSR Layout, Bangalore, India, and visitors are welcome. More info: https://satya-img.github.io/


Ryan Kidd

2 months ago

This seems like a great tool that should definitely exist. In fact, it's so obviously useful, I don't know why you need Manifund! Do you need help applying for VC funding? Are you in urgent need of funds?


Utsav Singhal

2 months ago

@RyanKidd yes, we need funds for compute, for data, and most importantly to keep paying our research team's stipends. People like what we are building, and we have talked to banks, social media companies, dating sites, and others about it. But they are not willing to pay to counter a phenomenon they do not yet see as a threat; most are more focused on shipping GenAI use cases to their customers. Similarly, VCs won't show interest until they are sure the project will make money, and since this is not yet a FOMO field, it is a risky bet for them. Hence we turned to Manifund.


Utsav Singhal

2 months ago

@RyanKidd would love to talk to you more about this. I have sent you a LinkedIn request as well.


Ryan Kidd

2 months ago

@Fibonan if I were a VC, I would bet I could make money off this product. I'm honestly really surprised that YC, Juniper, Metaplanet, Lionheart, Mythos, Polaris, Fifty Years, and Moonshot aren't interested.


Ryan Kidd

2 months ago

@Fibonan it seems like GetReal Labs, TrueMedia, and Reality Defender have a similar business model. If they can get VC funding, I think you can too!


Ryan Kidd

2 months ago

@Fibonan the public also seems to care a lot. Here's an NY Times article about TrueMedia from April 2024.


Ryan Kidd

2 months ago

@Fibonan And another NY Times article, which says, "More than a dozen companies have popped up to offer services aimed at identifying whether photos, text and videos are made by humans or machines."


Tony Gao

3 months ago

Do you think it will be possible to run this tool locally?


Utsav Singhal

3 months ago

@TonyGao yes. We ran our audio model smoothly on a OnePlus 12 with 12 GB of memory. We are still optimizing the image/video models, but running them locally does not seem infeasible. We want the tool to run smoothly on the edge wherever needed.


Tony Gao

3 months ago

Can you clarify what you mean by edge: edge servers, or the end-user device? I ask about running the model locally because of the privacy implications.


Utsav Singhal

3 months ago

@TonyGao all of them. Detection models are not nearly as heavy as generative models, so yes, you can run them locally on any device with decent compute.
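For a sense of what makes this practical on a phone, here is a minimal sketch of shrinking a detector for on-device use: export to ONNX, then apply weight-only dynamic quantization. The stand-in model, input shape, and filenames are illustrative assumptions, not our shipped pipeline:

```python
# Sketch of preparing a detector for edge inference. The stand-in
# model, input shape, and filenames are illustrative assumptions.
import torch
from torchvision import models
from onnxruntime.quantization import quantize_dynamic, QuantType

model = models.resnet18(num_classes=2)  # stand-in binary detector
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # one RGB frame
torch.onnx.export(model, dummy, "detector.onnx",
                  input_names=["frame"], output_names=["logits"])

# INT8 weight quantization roughly quarters the file size, which helps
# the model fit in memory alongside other apps on a handset.
quantize_dynamic("detector.onnx", "detector.int8.onnx",
                 weight_type=QuantType.QInt8)
```

The resulting .onnx file can then be loaded with ONNX Runtime Mobile on Android/iOS, or with onnxruntime-web in a browser, so frames never have to leave the device.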