When Someone Pays You to Record Calls — and Sell the Data to AI Companies

In the booming era of AI, data is the new “black gold” – but when it comes from your private conversations, the story turns alarming. Recently, the Neon Mobile app stunned the market by soaring to No. 2 on Apple’s App Store simply by paying users to record their calls and selling that audio to AI companies. This isn’t just a startling privacy case; it’s a reminder that in the AI world, transparency and ethics must come first.

1. Neon – A Controversial Social App on the App Store

In just a few days, Neon Mobile created a sensation on the U.S. App Store. From rank 476 in the Social Networking category on September 18, 2025, the app jumped to 10th place in a single day, and by Wednesday, September 24, it had taken the runner-up spot: No. 2 among the top free social apps. According to Appfigures data, Neon even broke into the top 6 overall apps and ranked in the top 7 across all apps and games.

What caused Neon’s explosion? Not ordinary social features, but the promise of "earning hundreds or even thousands of dollars annually" just by allowing your phone calls to be recorded. The unique twist: the app pays users to collect audio data, which it then sells to AI companies to train models. Neon’s popularity reflects a disturbing reality: some users are willing to trade personal privacy for small payouts, without realizing the greater risks to themselves and society.

2. How Neon Works

Neon operates simply but with an enticing hook: Make calls, get recorded, and earn money. Specifically:

  • Payment structure: Users earn $0.30 per minute when calling another Neon user, and up to $30 per day for calls to anyone; the app also offers referral bonuses (see the payout sketch just after this list).
  • Recording and data usage process: According to its Terms of Service, Neon collects both inbound and outbound calls made through its mobile app. Its ads claim to record only one side (the user’s) unless calling another Neon user. This data is sold directly to "AI companies" for developing, training, testing, and improving machine learning models, AI tools, and related technologies.
  • Alarming terms of service: Neon grants itself "global, exclusive, irrevocable, transferable, royalty-free rights" to sell, use, store, transfer, publicly display, perform, reproduce, modify, and distribute your recordings – in whole or in part, across any format or channel, including future technologies. This includes sublicensing through multiple layers. Beta features come without warranties and may be buggy.
  • Data anonymity: Neon claims to remove names, emails, and phone numbers before selling recordings, but does not explain how, or what its AI partners do with the data afterward. In TechCrunch’s tests, the app did not notify users when recording, nor warn call recipients; it functioned like a normal VoIP app.
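
To make the advertised numbers concrete, here is a minimal payout sketch in Python. The function and the assumption that the $30 cap bounds total daily earnings are illustrative guesses, not Neon’s actual billing logic.

```python
def daily_payout(neon_minutes: float,
                 rate_per_min: float = 0.30,  # advertised rate for Neon-to-Neon calls
                 daily_cap: float = 30.00) -> float:
    """Estimate one day's earnings under Neon's advertised structure.

    Hypothetical model: assumes the $0.30/min rate applies to
    Neon-to-Neon minutes and the $30/day cap bounds total earnings.
    """
    return min(neon_minutes * rate_per_min, daily_cap)

# 150 minutes of Neon-to-Neon calls already hits the daily cap:
print(daily_payout(150))  # 30.0 (uncapped it would be $45)
```

Under that cap, even a maximally active caller tops out around $900 a month, which is roughly where the “thousands of dollars annually” pitch comes from.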

This model sounds appealing but opens the door to abuse of sensitive voice data, where AI can "learn" from your real voice without clear oversight.

3. Legal and Security Risks

Neon’s existence is not only shocking but raises major questions about legality and safety. Key risks include:

  • Violation of recording laws: In many U.S. states and other countries, wiretap laws make it illegal to record a conversation without consent from both parties. Neon tries to sidestep this by recording only one side, as noted by legal expert Jennifer Daniels (Blank Rome): “It’s an interesting approach to avoid the law.” However, lawyer Peter Jackson (Greenberg Glusker) suspects the “one-sided recording” language may mean the full call is captured, with only the other party’s side omitted from the transcript.
  • Voice cloning and fraud risks: Voice data can be used to clone voices and create fake calls sounding like you, leading to financial or social fraud. Jackson warns: "Once your voice is out there, it can be used for scams. With just your phone number and voice, they have enough to impersonate you."
  • Lack of transparency and security: Neon doesn’t disclose which AI partners buy the data or what they do next. "Anonymous" voice data may not truly be safe, since voices are highly personal identifiers. Moreover, any company can be hacked – and Neon, run by founder Alex Kiam from a New York apartment (with funding from Upfront Ventures), has yet to prove strong security.

These risks affect not only Neon users but also call recipients, unknowingly dragged into the AI data cycle. When voice data falls into the wrong hands, consequences go far beyond privacy loss. That’s why many companies are investing in safe and secure AI voice as a differentiator.

4. Lessons for AI Companies

Neon’s case is not just a tech phenomenon but also a wake-up call for the AI industry on handling user data. Key lessons include:

  • Transparency in data collection: Users must know what is collected, why, and if it’s shared with third parties. Lack of transparency destroys trust, even if the app pays users.
  • Data ownership governance: Control should remain with users. AI companies should request narrow, revocable permissions rather than demanding sweeping rights of the kind in Neon’s terms. This is vital for both legality and ethics.
  • High security standards and anonymization: Stripping names or numbers isn’t enough, because voice itself carries subtle identifiers. Companies must apply advanced anonymization, end-to-end encryption, and regular audits (a naive scrubber sketch follows this list).
  • Global compliance: Privacy and recording laws differ worldwide. Global AI companies must comply strictly with GDPR (Europe), U.S. data privacy laws, and emerging Asian regulations.
  • Building trust with AI ethics: As data becomes the "new oil," building an ethical AI framework and publicly committing to it will be a long-term competitive advantage. Users choose not only for features but also for peace of mind.
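
To see why stripping text identifiers falls short, consider a naive scrubber of the kind the “remove names, emails, and phone numbers” claim implies. The regexes and function here are hypothetical, not Neon’s pipeline; even if they worked perfectly on transcripts, the audio itself would still carry a re-identifiable voiceprint.

```python
import re

# Hypothetical transcript scrubber: masks emails and phone numbers only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_transcript(text: str) -> str:
    """Mask obvious PII in a call transcript.

    Names, addresses, and (crucially) the speaker's voice biometrics
    are untouched, which is why this level of anonymization is weak.
    """
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(scrub_transcript("Reach me at +1 212 555 0199 or jane@example.com"))
# -> "Reach me at [PHONE] or [EMAIL]"
```

Stronger safeguards would break the biometric link itself, for example through voice conversion or re-synthesis, combined with encryption in transit and at rest and contractual limits on downstream use.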

In this context, trustworthy AI voice technology is not just a product feature but a commitment to maintaining public trust.

5. Conclusion & The Future of Voice Data in AI

Neon’s story makes one thing clear: voice data is becoming a “strategic resource” in the AI era. An app climbing to the App Store’s top ranks just by paying for call data shows how valuable real-world audio is for AI training.

In the future, we may see:

  • Voice data marketplaces booming: Companies will build "data markets" to buy/sell recordings for AI voice, chatbots, virtual assistants, and multimodal models.
  • More legal disputes: Countries will tighten privacy laws, especially around voice recording and recognition. Apps like Neon may face investigations or bans.
  • "Responsible AI" standards: Users are increasingly privacy-aware. In the future, only transparent AI firms with certifications and clear policies will retain trust.
  • Widespread applications of voice data: Beyond AI training, voice data will serve healthcare (diagnosing via voice), education (language learning with real speech), and entertainment (virtual characters, multilingual dubbing).

In short, voice data will be the "new oil" of AI. Amid opportunities and risks, safe AI voice will be the decisive factor for sustainable success in the coming decade.
