- AI-DeepFake Technology Increases Scams by 2137% in Last 3 Years
The incidence of deepfakes and AI-based fraud has risen dramatically in recent years. According to a report by Signicat and Consult Hyperion, fraud driven by AI and deepfakes has increased by 2,137% over the last three years. To counter this growing threat, a startup called Reality Defender has developed a new AI tool.
The tool identifies fake faces in video calls in real time. Christopher Renn, the company's product manager, demonstrated its capabilities by joining a video call with a deepfake of Elon Musk's face; the tool scanned each frame and flagged the call as fake while it was still in progress. The tool is expected to reach the market soon, and once it does, scams carried out over fake video calls should become much harder to pull off.
Fighting AI only with the help of AI: Ben Colman, the company's CEO, says Reality Defender is not against the development of AI. The company believes AI can be used to improve people's lives, whether in medicine or productivity, and its aim is to address the risks AI poses. At present, the project uses AI to tackle the threat of deepfakes for governments and corporations. The company says the tool will be made available to ordinary users in the near future, once its accuracy has been improved.
How does the new AI technique work? The tool uses AI to detect fake video calls by examining facial signals frame by frame, such as skin texture, gestures, and other visual cues. As soon as it detects any kind of abnormality, it displays an alert on the screen.
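The article does not describe Reality Defender's internals, but the general pattern it outlines, scoring each frame of a call and raising an alert when the score crosses a threshold, can be sketched in a few lines. The sketch below is a minimal illustration under that assumption; `score_frame`, the threshold value, and the dummy frame source are hypothetical placeholders, not the company's actual implementation.

```python
# Minimal sketch of frame-by-frame deepfake alerting during a video call.
# The detector, threshold, and frame source are assumptions for illustration.

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Alert:
    frame_index: int
    fake_score: float  # 0.0 = likely real, 1.0 = likely fake


def score_frame(frame_index: int, frame_bytes: bytes) -> float:
    """Placeholder for a learned detector that would inspect skin texture,
    lighting, and facial motion. A real system would run a trained model
    here; this stub just returns a fixed low score."""
    return 0.1


def monitor_call(frames: Iterable[bytes], threshold: float = 0.8) -> List[Alert]:
    """Scan each incoming frame and collect alerts whenever the fake score
    crosses the threshold, mirroring the per-frame check described above."""
    alerts: List[Alert] = []
    for i, frame in enumerate(frames):
        score = score_frame(i, frame)
        if score >= threshold:
            alerts.append(Alert(frame_index=i, fake_score=score))
            print(f"ALERT: frame {i} looks synthetic (score={score:.2f})")
    return alerts


if __name__ == "__main__":
    # Dummy byte strings standing in for a live video stream.
    dummy_frames = [b"\x00" * 16 for _ in range(30)]
    monitor_call(dummy_frames)
```

In practice the per-frame scores would come from a neural network and would likely be smoothed over a window of frames before alerting, so that a single noisy prediction does not interrupt a legitimate call.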
Image credit: Divya Bhaskar.