In an era where AI is reshaping our digital landscape at an unprecedented pace, we face a critical threat to the very foundations of digital security and trust. The emergence of AI-generated content, particularly deepfakes and manipulated media, has opened a Pandora's box of misuse and deception. While VC money floods into generative AI, the equally crucial field of protecting against these threats remains woefully underfunded and overlooked.
The urgency of addressing this challenge cannot be overstated. Recent incidents paint an alarming picture:
Corporate Fraud: In 2019, a UK-based energy company fell victim to a sophisticated attack in which AI was used to mimic a CEO's voice, resulting in a fraudulent transfer of $243,000.
Financial Scams: Deepfake videos of high-profile figures like Elon Musk have been used to promote fraudulent cryptocurrency schemes, leading to significant financial losses for investors worldwide.
Political Manipulation: A deepfake video circulated showing a French diplomat making false, inflammatory statements, leading to international tensions that required substantial diplomatic efforts to resolve.
Misinformation Campaigns: Former U.S. President Donald Trump reposted AI-generated deepfakes and manipulated images of Taylor Swift to Truth Social, depicting the pop star and "supporters" endorsing him in the upcoming election. This incident showcases how deepfakes can be weaponized for political gain and voter manipulation.
Social Media Deception: We're witnessing a proliferation of deepfake-based influencer accounts on platforms like Instagram. For instance, an account with the handle @aayushiiisharma__ has amassed around 100,000 followers with hyper-realistic AI-generated content. Such accounts could potentially be used to scam followers or deceive brands hoping to advertise with real influencers.
Personal Security Threats: Voice-based scams are on the rise, where criminals use AI to emulate the voices of loved ones, often to request emergency financial aid. A recent case in India saw a woman lose Rs 1.4 lakh (approximately $1,700) to such a scam.
The scale of this threat is set to escalate dramatically. Gartner Research predicts that by 2025, a staggering 30% of deepfakes will be indistinguishable from authentic media without specialized detection tools. This forecast underscores the pressing need for advanced detection technologies to safeguard our digital interactions.
We're a team of researchers and engineers from IIT Delhi and IIT Roorkee who are developing a state-of-the-art deepfake detection tool for mobile and web, aiming to identify 98-99% of modified media.
Key Features:
Multi-Platform: Runs on both mobile and the web.
High Accuracy: Aiming for a 98-99% detection rate, significantly higher than current industry standards. Ambitious, yes, but not infeasible.
Real-Time: Detection runs while you scroll or while you talk.
Continuous Learning: Machine learning lets the models adapt to new deepfake techniques as they emerge (see the illustrative sketch after this list).
Privacy-Focused: Strict adherence to data protection regulations, ensuring user privacy and data security.
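To make "detection" concrete, here is a minimal sketch of frame-level fake-vs-real scoring in Python. Our actual architecture is not described in this post, so the ResNet-18 backbone, the ImageNet-style preprocessing, and the 0.5 decision threshold below are placeholder assumptions for illustration, not the production model.

```python
# Minimal sketch of frame-level "fake vs. real" scoring.
# Assumptions (not the production model): ResNet-18 backbone, ImageNet-style
# preprocessing, a single-logit head, and a 0.5 decision threshold.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Randomly initialised stand-in; a real detector would load a trained checkpoint.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)  # binary head: one logit = "fakeness"
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(image_path: str) -> float:
    """Return P(fake) for a single frame (the 0.5 threshold is an assumption)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logit = model(x)
    return torch.sigmoid(logit).item()

if __name__ == "__main__":
    p = fake_probability("frame.jpg")  # hypothetical input frame
    print(f"P(fake) = {p:.3f} -> {'FAKE' if p > 0.5 else 'REAL'}")
```

A real detector would load trained weights and aggregate scores across many frames, and across the audio and video streams, rather than trusting a single frame.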
That's me, an honorary veteran of the Air Force.
Well, not really!
We will offer one year of free access to the tool to every contributor who donates $10 or more on Manifund. Invites will be sent to the same email address used to make the contribution on Manifund.
50% - Research team stipends (we are currently paying these out of our own pockets)
30% - Cloud compute costs (we have only minimal credits to survive on)
10% - Privacy certifications (US and EU)
10% - Operational expenses (coworking space, travel)
Utsav Singhal -> LinkedIn
Sarthak Gupta -> LinkedIn
4 research interns from IIT Delhi and IIT Roorkee
No part of the project would count as a failure. Every step is incremental, and we will open-source the tool if we are unable to deliver the promised accuracy by the end of March 2025.
$7,715 grant from Entrepreneur First (incubator). No other funding or commitments beyond this.
Feel free to ask any questions or request clarifications. We are based in HSR, Bangalore; you are welcome to visit us at Urban Vault 65, HSR, Bangalore, India. More info: https://satya-img.github.io/
Ryan Kidd
about 1 month ago
This seems like a great tool that should definitely exist. In fact, it's so obviously useful, I don't know why you need Manifund! Do you need help applying for VC funding? Are you in urgent need of funds?
Utsav Singhal
about 1 month ago
@RyanKidd Yes, we need funds for compute, data, and, most importantly, to keep supporting our research team's stipends. People like what we are building, and we have talked to banks, social media companies, dating sites, and other companies about it. But they are not willing to pay to counter a phenomenon they don't yet consider a threat; most of them are more focused on shipping GenAI use cases to their customers. Similarly, VCs won't show interest until they are sure the project will make money, and since this is not a FOMO field yet, it's a risky bet for them. Hence we turned to Manifund.
Utsav Singhal
about 1 month ago
@RyanKidd Would love to talk to you more about this. I have sent you a LinkedIn request as well.
Ryan Kidd
about 1 month ago
@Fibonan if I were a VC, I would bet I could make money off this product. I'm honestly really surprised that YC, Juniper, Metaplanet, Lionheart, Mythos, Polaris, Fifty Years, and Moonshot aren't interested.
Ryan Kidd
about 1 month ago
@Fibonan it seems like GetReal Labs, TrueMedia, and Reality Defender have a similar business model. If they can get VC funding, I think you can too!
Ryan Kidd
about 1 month ago
@Fibonan The public also seems to care a lot. Here's an NY Times article about TrueMedia from April 2024.
Ryan Kidd
about 1 month ago
@Fibonan And another NY Times article, which says, "More than a dozen companies have popped up to offer services aimed at identifying whether photos, text and videos are made by humans or machines."
Utsav Singhal
about 2 months ago
@TonyGao Yes. We ran the audio model smoothly on a OnePlus 12 with 12 GB of memory. We are still optimizing the image/video models, but that doesn't seem infeasible either. We want the tool to run smoothly on the edge wherever needed.
Utsav Singhal
about 2 months ago
@TonyGao All of them. Detection models are not as heavy as generative models, so yes, you can run them locally on any device with decent compute.
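To give a rough sense of what running detection on the edge involves, here is a minimal sketch that exports a small image classifier to ONNX and measures CPU inference latency with onnxruntime. Our actual deployment path on the OnePlus 12 is not described in this thread; the MobileNetV3-Small backbone, the 224x224 input, and the CPU execution provider below are illustrative assumptions only.

```python
# Minimal sketch of preparing a detector for on-device inference: export to ONNX,
# then measure CPU latency with onnxruntime. Backbone, input size, and runtime are
# illustrative assumptions, not our actual OnePlus deployment path.
import time

import numpy as np
import onnxruntime as ort
import torch
from torchvision import models

# Small backbone as a stand-in for the (undisclosed) detection model.
model = models.mobilenet_v3_small(weights=None)
model.classifier[-1] = torch.nn.Linear(model.classifier[-1].in_features, 1)
model.eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "detector.onnx",
                  input_names=["frame"], output_names=["logit"],
                  opset_version=17)

# CPU-only session, a rough proxy for a mid-range phone's runtime.
sess = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

start = time.perf_counter()
runs = 20
for _ in range(runs):
    (logit,) = sess.run(None, {"frame": frame})
avg_ms = (time.perf_counter() - start) / runs * 1000
print(f"avg latency: {avg_ms:.1f} ms, logit: {logit.ravel()[0]:.3f}")
```

On an Android device the same ONNX graph could be run through an accelerator-backed execution provider (NNAPI, for example) instead of the CPU provider, which is where the "decent compute" headroom mentioned above comes in.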