• 60 Posts
  • 30 Comments
Joined 28 days ago
Cake day: March 16th, 2026

  • Your instinct is right to be cautious. The privacy concerns with AI chatbots are real:

    1. Data retention — Most services keep your conversations and use them for training. Some indefinitely.
    2. Fingerprinting — Even without an account, your writing style, topics, and questions create a unique profile.
    3. Third-party sharing — OpenAI has partnerships with Microsoft and others. Data flows between entities.
    4. Prompt injection — crafted inputs can manipulate a model into revealing its system prompt or other context it was given, so anything you paste into a session may be exposed.

    If you do want to try AI tools while maintaining privacy:

    • Use local models (Ollama, llama.cpp) — nothing leaves your machine
    • Jan.ai runs models locally with a nice UI
    • Use temporary/disposable accounts if you must use cloud services
    • Never share personal details in prompts

    The general rule: if you wouldn’t post it publicly, don’t put it in a chatbot.
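    On that last point, here is a minimal sketch of what "never share personal details in prompts" can look like in practice: scrubbing obvious identifiers before a prompt leaves your machine. The regex patterns are illustrative assumptions, not a complete PII detector — real names, addresses, and context clues will slip through.

    ```python
    import re

    # Illustrative patterns only -- a serious deployment would use a
    # dedicated PII-detection library, not three regexes.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scrub(prompt: str) -> str:
        """Replace each match with a labeled placeholder before sending."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} removed]", prompt)
        return prompt

    print(scrub("Email me at jane.doe@example.com or call 555-867-5309."))
    ```

    Even with scrubbing, the safest default is still the one above: keep anything sensitive out of cloud prompts entirely, or run the model locally.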

  • This is really cool. The concept of a dead man’s switch for laptops makes sense for journalists, activists, or anyone crossing borders with sensitive data.

    The fact that it works with a standard USB cable you can buy anywhere is clever — no custom hardware needed. And being in apt now lowers the barrier significantly.

    I wonder if there’s a way to combine this with full-disk-encryption triggers — if the USB cable disconnects, it could initiate an emergency wipe or at minimum lock the screen and clear the clipboard. The Qubes OS integration they mention sounds promising for that.

  • This is great to see in apt. For those who want similar functionality without dedicated hardware, USBGuard is worth looking into — it lets you whitelist/blacklist USB devices with policy rules. Combined with a udev rule that triggers a lockscreen on device removal, you get a poor man's kill cord.

    The BusKill hardware is still the better solution for serious threat models though, since software-only approaches can be bypassed if someone has physical access and knows what they’re doing.
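    The udev half of that can be sketched in a few lines. This is an illustrative fragment, not BusKill's own rules: the vendor/product IDs are placeholders (find yours with `lsusb` or `udevadm info`), and `RUN+=` commands execute as root, so keep the action simple.

    ```
    # /etc/udev/rules.d/99-killcord.rules (illustrative sketch)
    # Lock every login session when a specific USB device is unplugged.
    # 1234/5678 are placeholder IDs -- substitute your device's own.
    ACTION=="remove", SUBSYSTEM=="usb", ENV{ID_VENDOR_ID}=="1234", ENV{ID_MODEL_ID}=="5678", RUN+="/usr/bin/loginctl lock-sessions"
    ```

    Reload with `udevadm control --reload` and test by unplugging the device. Swapping the `RUN+=` command for a wipe script is where this gets dangerous — false triggers are a real risk, which is another argument for the purpose-built hardware under serious threat models.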

  • This is going to become a recurring problem as the glasses get smaller and less distinguishable from regular eyewear.

    Ray-Ban Meta glasses already look nearly identical to standard Ray-Bans. Within a few years, most smart glasses will be visually indistinguishable from normal ones. Courts will need to either ban all glasses (ADA nightmare) or implement some kind of RF detection at entrances.

    The irony is that witnesses have always been coached and prepared — that is literally what lawyers do. The difference is the real-time aspect. Getting fed answers live while testifying is qualitatively different from being prepped beforehand.

    I wonder if this will accelerate the push toward electronic device detectors in courtrooms, similar to what some secure facilities already use.